17:02:29 #startmeeting tailgate
17:02:29 Meeting started Thu Dec 10 17:02:29 2015 UTC and is due to finish in 60 minutes. The chair is gema_. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:02:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:02:33 The meeting name has been set to 'tailgate'
17:02:34 o/
17:02:39 #topic rollcall
17:02:41 clee: hello!
17:02:53 hey gema_ :)
17:03:38 hi!
17:03:50 I was emailing spyderdyne, he seems to be waiting for us in a hangout
17:04:10 malini: o/
17:04:21 o/
17:04:40 I was confused abt the time again - but tht's me :D
17:04:46 don't worry
17:05:14 so, I haven't prepared any particular agenda because it has been so long that I thought we should recap
17:05:38 #topic where are we, where are we going
17:06:19 is someone grabbing spyderdyne from the hangout?
17:06:28 I sent him an email saying we were here
17:06:32 cool
17:06:52 he is downloading an IRC client and will be here soon
17:07:20 maybe we should give him and hockeynut a couple of mins
17:07:30 yes
17:07:45 meanwhile, how's life?
17:08:10 in the words of the great Wu-Tang Clan, life is hectic :)
17:08:17 spyderdyne: lol, welcome :)
17:08:25 thanks
17:08:42 spyderdyne: we were going to do some recap and see where we are and where we are going
17:08:51 since it's been so long since the last meeting
17:09:02 I had sent out a weekly hangouts invite to at the very least have a weekly meeting reminder sent out
17:09:16 spyderdyne: the reminder works great for me
17:09:17 and had great difficulty with the simple arithmetic of setting the timezone
17:09:21 but I didn't see the hangout
17:09:28 tht is NOT simple arithmetic
17:09:32 spyderdyne: I use a website for that
17:09:34 there is a UTC trick that just almost works, but not reliably
17:09:43 actually, I googled it
17:09:43 lol
17:09:46 haha
17:10:09 timezones are bad enough, and then they change time every 6 months
17:10:13 yep
17:10:14 right
17:10:18 my brain is not complex enough to handle this :/
17:10:21 to save whale oil reserves
17:10:58 the problem is that we work on complex patterns all day, so something simple like what time it will be next Thursday is complicated all of a sudden
17:11:29 absolutely
17:11:39 so maybe I will send an email 1 hour prior to the meeting every Thursday
17:11:43 that way everyone knows it's coming
17:11:55 now that we have figured it out, we should be ok
17:12:00 that might help actually
17:12:14 because adding the meeting to the calendar if you are not invited directly didn't prove easy for me either
17:12:22 ok, will do that
17:12:46 now, regarding our purpose and what we wanted to do originally
17:12:57 done
17:13:02 this is the meeting now in this IRC?
17:13:04 has anyone checked what the openstack performance team are doing?
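
The "simple arithmetic" joked about above is the classic next-meeting-in-local-time problem. A minimal sketch, assuming Python 3.9+ and the standard-library zoneinfo (the 17:00 UTC Thursday slot matches this meeting; the local zones shown are arbitrary examples):

    from datetime import datetime, timedelta, timezone
    from zoneinfo import ZoneInfo

    def next_thursday_1700_utc(now=None):
        """Return the next Thursday 17:00 UTC meeting as an aware datetime."""
        now = now or datetime.now(timezone.utc)
        days_ahead = (3 - now.weekday()) % 7  # Monday=0, so Thursday=3
        candidate = (now + timedelta(days=days_ahead)).replace(
            hour=17, minute=0, second=0, microsecond=0)
        if candidate <= now:  # this week's slot has already passed
            candidate += timedelta(days=7)
        return candidate

    meeting = next_thursday_1700_utc()
    for tz in ("America/Chicago", "Europe/London"):
        print(tz, meeting.astimezone(ZoneInfo(tz)))

Because zoneinfo carries the IANA rules, the twice-yearly DST changes complained about above come out right without any manual offset arithmetic.
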
17:13:21 hockeynut: yes
17:13:22 hockeynut: we've always been on IRC except when we were trying to rush a presentation together
17:13:43 I'd say IRC is better in the sense that there is a written log
17:13:48 for the people that cannot make it
17:13:55 and taking notes is not necessary :)
17:14:03 ok, just wanted to be sure I'm in the right place :-D
17:14:18 and more importantly, you are at the right time ;)
17:14:26 +1 for IRC
17:14:33 I wud have missed this meeting otherwise
17:14:51 alright
17:14:57 I believe we are required to have IRC content just to let the rest of the community stay up to date with our meetings, but I will keep the hangout alive each week in case someone needs to present or anyone needs to discuss offline
17:15:08 spyderdyne: sounds good
17:15:24 so, I missed the last 2 or 3 meetings due to workload
17:15:30 but I have some updates
17:15:35 spyderdyne: we are all ears
17:15:39 (eyes)
17:16:07 I know some of you shied away when I presented a scale-centric project for the summit presentation, which is fine
17:16:51 we are currently working with Red Hat, Intel, and others to provide scale testing components and methodologies to the openstack governance board
17:17:17 that sounds really good
17:17:18 our current efforts are to kick the tires on the mythos components, which I am happy to say are working for the most part
17:17:42 we decided to abandon the idea of scale testing instances that don't do anything
17:17:49 spyderdyne: is that tied up into the new openstack performance testing group?
17:17:53 i.e. cirros images, micro micros, etc.
17:18:09 I have had no contact with the new openstack perf testing group,
17:18:19 but our partners may be tied into it
17:18:22 ack
17:18:42 I am using off-the-shelf linuxbenchmarking.org components, and some open source projects
17:19:10 our next phase will be to enter the arena of a 1,000 hypervisor Intel openstack data center
17:19:38 we will do testing with the other interested parties there, and after the 1st of the year it will double in size
17:19:42 spyderdyne: that's the OSIC (Rackspace + Intel)?
17:19:46 yes
17:19:54 sweet
17:20:20 spyderdyne: what will double in size?
17:20:23 the cloud?
17:20:26 the goal is to find common ground, and provide some sanity to the scale testing and performance testing methodologies
17:20:40 the 1,000 hypervisors becomes 2,000
17:20:43 :)
17:20:48 wow, awesome
17:21:24 this is really cool!
17:21:32 each group has things they are using internally, so it looks like it might be an arms race to see who has the most useful weapons for attacking a cloud and measuring the results
17:21:48 spyderdyne: is rally anywhere in the mix?
17:21:58 the mythos project is my contribution, and looks to be a nuclear device
17:22:18 rally is being used in heavily modified forms by Cisco and Red Hat/IBM
17:22:54 we have the odin wrapper to chain load tests, and Red Hat has IBMCB which does something similar, but addresses some of rally's shortcomings
17:23:14 it also behaves differently in that it spins up things like a normal rally test,
17:23:35 but leaves them there to run multiple tests against, and then tears them down as a separate step
17:23:56 currently rally performs build, test, and teardown (not so good at the teardown part…) for every test run
17:24:41 my team abandoned rally b/c we didn't feel like we could trust the results, and there is a magic wizard inside that does things we can't track or account for
17:24:42 :)
17:24:57 yeah, sounds sensible
17:25:59 we are at 1,000 instances now and working out some minor bugs with our data center
17:26:20 spyderdyne: and what metrics are you interested in?
17:26:24 time to spin 1000 instances?
17:26:25 then we will push to as many instances as we have VCPUs for and see if we can shut down neutron
17:26:47 spyderdyne: you don't overcommit VCPUs?
17:26:56 (for the testing, I mean)
17:27:35 1. if neutron can be overloaded with traffic, what level of traffic breaks it, and what component breaks first
17:27:39 (SLA)
17:28:19 ack
17:28:24 2. what linuxbenchmarking.org measurements do we get on our platform for each flavor?
17:29:08 3. how many instances that are actually doing work of some kind can a hypervisor support
17:30:11 spyderdyne: and you determine what instances are doing work via ceilometer?
17:30:13 to this end we are setting instances on private subnets, having them discover all the other similar hosts on their private subnets, and having them run siege tests against each other using bombardment scale tests, pushing the result to our head node for reporting
17:30:25 or on the load of the host
17:30:25 we don't use ceilometer at all for this
17:30:30 ok
17:30:48 the head node has a web server that clients check in with every 10 minutes via cron
17:31:16 we write a script named .dosomething.sh and place it in the monitored directory
17:31:18 ok
17:31:47 ack, I understand how it works, thx
17:31:54 we then move it from .dosomething.sh to dosomething.sh and the clients check in over a 10 minute offset to see if there is something new for them to run.
17:31:58 if there is they run it
17:32:46 in our scale tests, mongodb has been unable to even keep up with more than a few thousand instances
17:33:01 so it hasn't been useful to us
17:33:01 spyderdyne: and what are you using mongo for?
17:33:08 ceilodb
17:33:11 ack
17:33:34 there are scalability issues with ceilometer + mongo
17:33:45 we've also been unable to use it with hundreds, not thousands
17:33:57 use it as in not only storing data but also mining it later
17:34:30 but that's an issue for another day, I guess
17:34:39 spyderdyne: very interesting, all that you guys are doing
17:34:44 I was hoping someone would put more support behind the gnocchi project, but in our case with Ceph for all our storage, it may cause other issues using that as well
17:35:14 the issue is all the writes needed
17:36:47 it makes sense to distribute them, and swift makes sense to use, but if you are using your object store as a block store as well, it just shifts the issue from adding another db platform to scale out in the control plane to making sure the storage can keep up
17:37:49 spyderdyne: are results from this testing going to be available publicly?
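
For readers who have not used rally: it builds resources, runs a scenario, and tears the resources down again for every run, whereas the odin/IBMCB approach described above provisions once, chains several tests against the same live resources, and tears down as an explicit final step. A minimal sketch of that lifecycle; provision(), run_test(), and teardown() are hypothetical stand-ins for real OpenStack calls, not odin or IBMCB code:

    def provision(count):
        """Spin up `count` instances once, like a single rally build phase."""
        return [f"instance-{i}" for i in range(count)]

    def run_test(name, instances):
        """Run one load test against the already-running instances."""
        print(f"running {name} against {len(instances)} instances")
        return {"test": name, "count": len(instances)}

    def teardown(instances):
        """Tear everything down as a separate, final step."""
        print(f"deleting {len(instances)} instances")

    instances = provision(1000)
    try:
        # Chain-load tests against the same warm environment instead of
        # rebuilding per test the way rally's build/test/teardown cycle does.
        results = [run_test(t, instances) for t in ("siege", "iperf", "fio")]
    finally:
        teardown(instances)

Keeping resources up between tests saves provisioning time and means every test in the chain hits an identical, already-warm environment.
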
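The head-node check-in mechanism described above (cron every 10 minutes, scripts hidden as .dosomething.sh until a rename publishes them) could look roughly like this on the client side. The head-node URL, the manifest file, and the state file are assumptions for illustration, not mythos code:

    import os, stat, subprocess, urllib.request

    # Assumed layout: the head node's web server exposes the monitored
    # directory plus a plain-text manifest listing the script names.
    # Scripts still named .dosomething.sh stay hidden; renaming to
    # dosomething.sh is what publishes them to the clients.
    HEAD = "http://headnode.example.com/tasks"   # hypothetical
    DONE = os.path.expanduser("~/.tasks_done")   # local record of past runs

    done = set(open(DONE).read().split()) if os.path.exists(DONE) else set()

    manifest = urllib.request.urlopen(f"{HEAD}/manifest.txt").read().decode()
    for name in manifest.split():
        if name.startswith(".") or name in done:
            continue  # not yet published, or already executed on this client
        path = f"/tmp/{name}"
        urllib.request.urlretrieve(f"{HEAD}/{name}", path)
        os.chmod(path, os.stat(path).st_mode | stat.S_IEXEC)
        subprocess.run([path], check=False)  # the script pushes its own results
        done.add(name)

    with open(DONE, "w") as f:
        f.write("\n".join(sorted(done)))

A crontab line such as */10 * * * * python3 /usr/local/bin/checkin.py (path hypothetical) drives the 10-minute check-in; staggering the start minute per host gives the offset behaviour mentioned above.
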
17:38:03 I wanted to ask: since it wasn't very well received, should I remove all the mythos stuff from our team repo and just keep it in my github from now on?
17:38:22 the results from the intel testing will be made public
17:38:22 spyderdyne: you can do whatever you prefer
17:38:32 spyderdyne: 'well received' where?
17:38:33 spyderdyne: I don't think anyone is against your project
17:39:02 spyderdyne: the only issue was making us present this when we knew nothing about it, without you there
17:39:32 the results from our internal testing will be made public once we prove that our hardware + architecture blows the doors off of other platforms, or at least provides a high-water mark compared to them
17:39:46 spyderdyne: my only concern was presenting this as how everybody tests their installation of openstack, when you were the only one who knew what it is
17:39:59 :)
17:40:12 spyderdyne: it has nothing to do with how good mythos is - which I am sure is pretty cool
17:40:20 absolutely
17:40:26 spyderdyne: you scared us
17:40:37 & I can't even handle timezones :D
17:40:38 lol
17:41:06 spyderdyne: but if there is anything we can do to make mythos better, I would love to help :)
17:41:31 yep, agreed, unfortunately my 8 hypervisors are already overloaded x)
17:41:43 and it doesn't look like they'd make much of a difference
17:41:43 :D
17:42:22 spyderdyne: I think the work you are doing is keeping this group alive
17:42:38 spyderdyne: and hopefully we'll find a way to chip in
17:42:51 +1
17:43:09 I will present what I have been working on next week
17:43:17 explain it in detail like you've been doing
17:43:55 I would ask any of you who are able to spin up an ubuntu VM with VT support to check out the code and give it a spin
17:43:58 it's a jenkins plugin + a REST API for openstack-on-openstack functional testing
17:44:10 I could use the feedback and my docs definitely need improvement
17:44:28 spyderdyne: VT support?
17:44:39 Vanderpool
17:44:44 hardware virtualization
17:44:55 yep, I can do that
17:45:07 how many VMs do I need to install/use mythos?
17:45:29 anyway, don't say anything
17:45:32 I will try and ask questions
17:45:37 you can improve documentation based on that
17:46:49 spyderdyne: although I am not sure it'll happen before Xmas
17:47:06 #action gema to try mythos on ubuntu with vt support
17:47:29 I will try to find an ubuntu with vt support
17:47:40 If I can get hold of one, I'll do it too
17:47:41 1 ubuntu 15.04 vm, 2048MB ram, 100GB hdd (until I get packer to work and build a smaller source image)
17:48:09 spyderdyne: so you install it from an image?
17:48:24 as long as your openstack instances support hardware virt in libvirt (they are supposed to) then it should work
17:48:39 spyderdyne: and how do we check that it works?
17:48:48 it's using virtualbox (boo) and VB needs VT to run 64-bit guests
17:49:28 http://askubuntu.com/questions/292217/how-to-enable-intel-vt-x
17:49:43 #link http://askubuntu.com/questions/292217/how-to-enable-intel-vt-x
17:49:46 the script has a check built in and will fail gracefully if it isn't supported
17:49:58 ok
17:50:03 it's getting very friendly
17:50:05 :)
17:50:20 I am close to 1,000 commits now
17:50:27 perfect
17:50:39 you've been busy! :D
17:51:11 alright, thanks so much for explaining it and we'll start helping with compatibility at least
17:51:16 and documentation reviews
17:51:34 we are 10 mins from end of meeting
17:51:40 malini: anything from you?
17:52:24 clee: ?
17:52:26 hockeynut: ?
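
The graceful VT check mentioned above isn't shown in the log, but on a Linux host it generally comes down to looking for the vmx (Intel VT-x, "Vanderpool") or svm (AMD-V) CPU flags. A minimal sketch of such a check, assuming /proc/cpuinfo is available; this is an illustration, not mythos's actual script:

    import re, sys

    # vmx = Intel VT-x, svm = AMD-V. A missing flag means 64-bit VirtualBox
    # guests won't run. Note the flag can be present while VT is still
    # disabled in the BIOS; tools like Ubuntu's kvm-ok check that case too.
    try:
        cpuinfo = open("/proc/cpuinfo").read()
    except OSError:
        sys.exit("cannot read /proc/cpuinfo; is this a Linux host?")

    if re.search(r"^flags\s*:.*\b(vmx|svm)\b", cpuinfo, re.MULTILINE):
        print("hardware virtualization available")
    else:
        sys.exit("no vmx/svm flag: enable VT in the BIOS/EFI "
                 "(see the askubuntu link above) or pick another host")
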
17:52:32 I'm good
17:54:29 alright
17:54:36 spyderdyne: do you have anything else?
17:54:43 sorry - had to step away for a call
17:54:55 malini: no worries
17:55:12 alright, calling it a day then
17:55:22 thanks spyderdyne - this is really cool!
17:55:29 #endmeeting