17:02:29 <gema_> #startmeeting tailgate
17:02:29 <openstack> Meeting started Thu Dec 10 17:02:29 2015 UTC and is due to finish in 60 minutes.  The chair is gema_. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:02:30 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:02:33 <openstack> The meeting name has been set to 'tailgate'
17:02:34 <clee> o/
17:02:39 <gema_> #topic rollcall
17:02:41 <gema_> clee: hello!
17:02:53 <clee> hey gema_ :)
17:03:38 <gema_> hi!
17:03:50 <gema_> I was emailing spyderdyne, he seems to be waiting for us in a hangout
17:04:10 <gema_> malini: o/
17:04:21 <malini> o/
17:04:40 <malini> I was confused about the time again - but that's me :D
17:04:46 <gema_> don't worry
17:05:14 <gema_> so, I haven't prepared any particular agenda because it has been so long that I thought we should recap
17:05:38 <gema_> #topic where are we, where are we going
17:06:19 <malini> is someone grabbing spyderdyne from the hangout?
17:06:28 <gema_> I sent him an email saying we were here
17:06:32 <malini> cool
17:06:52 <gema_> he is downloading an irc client and will be here soon
17:07:20 <gema_> maybe we should give him and hockeynut a couple of mins
17:07:30 <malini> yes
17:07:45 <gema_> meanwhile, how's life?
17:08:10 <spyderdyne> in the words of the great Wu Tang CLan, life is hectic :)
17:08:17 <gema_> spyderdyne: lol, welcome :)
17:08:25 <spyderdyne> thanks
17:08:42 <gema_> spyderdyne: we were going to do some recap and see where we are and where we are going
17:08:51 <gema_> since it's been so long since the last meeting
17:09:02 <spyderdyne> i had sent out a weekly hangouts invite to at the very least have a weekly meeting reminder sent out
17:09:16 <gema_> spyderdyne: the reminder works great for me
17:09:17 <spyderdyne> and had great difficulty with the simple arithmetic of setting the timezone
17:09:21 <gema_> but I didn't see the hangout
17:09:28 <malini> that is NOT simple arithmetic
17:09:32 <gema_> spyderdyne: I use a website for that
17:09:34 <spyderdyne> there is a UTC trick that just almost works, but not reliably
17:09:43 <spyderdyne> actually, i googled it
17:09:43 <spyderdyne> lol
17:09:46 <gema_> haha
17:10:09 <malini> timezones are bad enough, and then they change time every 6 months
17:10:13 <gema_> yep
17:10:14 <spyderdyne> right
17:10:18 <malini> my brain is not complex enough to handle this :/
17:10:21 <spyderdyne> to save whale oil reserves
17:10:58 <spyderdyne> the problem is that we work on complex patterns all day, so something simple like what time it will be next Thursday is complicated all of a sudden
17:11:29 <gema_> absolutely
17:11:39 <gema_> so maybe I will send an email 1 hour prior to the meeting every Thursday
17:11:43 <gema_> that way everyone knows it's coming
17:11:55 <malini> now that we have figured it out, we should be ok
17:12:00 <spyderdyne> that might help actually
17:12:14 <gema_> because adding the meeting to the calendar when you're not invited directly didn't prove easy for me either
17:12:22 <gema_> ok, will do that
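For what it's worth, the UTC-to-local conversion that keeps tripping everyone up can be pushed onto the tz database instead of done by hand; a minimal sketch, assuming Python 3.9+ (zoneinfo) and using America/Chicago purely as an example zone:

```python
# Convert the Thursday 17:00 UTC meeting slot to a local timezone; the tz
# database handles DST changes. Python 3.9+ only; the zone is an example.
from datetime import datetime
from zoneinfo import ZoneInfo

meeting_utc = datetime(2015, 12, 17, 17, 0, tzinfo=ZoneInfo("UTC"))
local = meeting_utc.astimezone(ZoneInfo("America/Chicago"))
print(local.strftime("%Y-%m-%d %H:%M %Z"))  # 2015-12-17 11:00 CST
```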
17:12:46 <gema_> now, regarding our purpose and what we wanted to do originally
17:12:57 <hockeynut> done
17:13:02 <hockeynut> this is the meeting now in this irc?
17:13:04 <gema_> has anyone checked what the openstack performance team are doing?
17:13:21 <clee> hockeynut: yes
17:13:22 <gema_> hockeynut: we've always been on IRC except when we were trying to rush a presentation together
17:13:43 <gema_> I'd say irc is better in the sense that there is written log
17:13:48 <gema_> for the people that cannot make it
17:13:55 <gema_> and taking notes is not necessary :)
17:14:03 <hockeynut> ok, just wanted to be sure I'm in the right place :-D
17:14:18 <gema_> and more importantly, you are at the right time ;)
17:14:26 <malini> +1 for IRC
17:14:33 <malini> I would have missed this meeting otherwise
17:14:51 <gema_> alright
17:14:57 <spyderdyne> i believe we are required to have IRC content just to let the rest of the community stay up to date with our meetings, but i will keep the hangout alive each week in case someone needs to present or anyone needs to discuss offline
17:15:08 <gema_> spyderdyne: sounds good
17:15:24 <spyderdyne> so, i missed the last 2 or 3 meetings due to workload
17:15:30 <spyderdyne> but i have some updates
17:15:35 <gema_> spyderdyne: we are all ears
17:15:39 <gema_> (eyes)
17:16:07 <spyderdyne> i know some of you shied away when i presented a scale-centric project for the summit presentation, which is fine
17:16:51 <spyderdyne> we are currently working with red hat, intel, and others to provide scale testing components and methodologies to the openstack governance board
17:17:17 <gema_> that sounds really good
17:17:18 <spyderdyne> our current efforts are to kick the tires on the mythos components, which i am happy to say are working for the most part
17:17:42 <spyderdyne> we decided to abandon the idea of scale testing instances that dont do anything
17:17:49 <gema_> spyderdyne: is that tied up into the new openstack performance testing group?
17:17:53 <spyderdyne> i.e. cirros images, micro micros, etc.
17:18:09 <spyderdyne> i have had no contact with the new openstack perf testing group,
17:18:19 <spyderdyne> but our partners may be tied into it
17:18:22 <gema_> ack
17:18:42 <spyderdyne> i am using off the shelf linuxbenchmarking.org components, and some open source projects
17:19:10 <spyderdyne> our next phase will be to enter the arena of a 1,000 hypervisor intel openstack data center
17:19:38 <spyderdyne> we will do testing with the other interested parties there, and after the 1st of the year it will double in size
17:19:42 <hockeynut> spyderdyne that's the OSIC (Rackspace + Intel)?
17:19:46 <spyderdyne> yes
17:19:54 <hockeynut> sweet
17:20:20 <gema_> spyderdyne: what will double in size?
17:20:23 <gema_> the cloud?
17:20:26 <spyderdyne> the goal is to find common ground, and provide some sanity to the scale testing and performance testing methodologies
17:20:40 <spyderdyne> the 1,000 hypervisors becomes 2,000
17:20:43 <spyderdyne> :)
17:20:48 <gema_> wow, awesome
17:21:24 <malini> this is really cool!
17:21:32 <spyderdyne> each group has things they are using internally, so it looks like it might be an arms race to see who has the most useful weapons for attacking a cloud and measuring the results
17:21:48 <gema_> spyderdyne: is rally anywhere in the mix?
17:21:58 <spyderdyne> the mythos project is my contribution, and looks to be a nuclear device
17:22:18 <spyderdyne> rally is being used in heavily modified forms by cisco and red hat/ibm
17:22:54 <spyderdyne> we have the odin wrapper to chain load tests, and red hat has IBMCB which does something similar, but addresses some of rally’s shortcomings
17:23:14 <spyderdyne> it also behaves differently in that it spins up things like a normal rally test,
17:23:35 <spyderdyne> but leaves them there to run multiple tests against, and then tears them down as a separate step
17:23:56 <spyderdyne> currently rally performs build, test, and teardown (not so good at the teardown part…) for every test run
17:24:41 <spyderdyne> my team abandoned rally b/c we didn't feel like we could trust the results, and there is a magic wizard inside that does things we can't track or account for
17:24:42 <spyderdyne> :)
17:24:57 <gema_> yeah, sounds sensible
17:25:59 <spyderdyne> we are at 1,000 instances now and working out some minor bugs with our data center
17:26:20 <gema_> spyderdyne: and what metrics are you interested in?
17:26:24 <gema_> time to spin 1000 instances?
17:26:25 <spyderdyne> then we will push to as many instances as we have VCPUs for and see if we can shut down neutron
17:26:47 <gema_> spyderdyne: you don't overcommit VCPUs?
17:26:56 <gema_> (for the testing, I mean)
17:27:35 <spyderdyne> 1.  if neutron can be overloaded with traffic, what level of traffic breaks it, and what component breaks first
17:27:39 <spyderdyne> (SLA)
17:28:19 <gema_> ack
17:28:24 <spyderdyne> 2.  what linuxbenchmarking.org measurements do we get on our platform for each flavor?
17:29:08 <spyderdyne> 3.  how many instances that are actually doing work of some kind can a hypervisor support
17:30:11 <gema_> spyderdyne: and you determine what instances are doing work via ceilometer?
17:30:13 <spyderdyne> to this end we are setting instances on private subnets, having them discover all the other similar hosts on their private subnets, and having them run siege tests against each other using bombardment scale tests, pushing the result to our head node for reporting
17:30:25 <gema_> or on the load of the host
17:30:25 <spyderdyne> we don't use ceilometer at all for this
17:30:30 <gema_> ok
17:30:48 <spyderdyne> the head node has a web server that clients check in with every 10 minutes via cron
17:31:16 <spyderdyne> we write a script named .dosomething.sh and place it in the monitored directory
17:31:18 <gema_> ok
17:31:47 <gema_> ack, I understand how it works, thx
17:31:54 <spyderdyne> we then move it from .dosomething.sh to dosomething.sh and the clients check in over a 10 minute offset to see if there is something new for them to run.
17:31:58 <spyderdyne> if there is they run it
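A rough sketch of the check-in mechanism described above; the head-node URL, script name, and state file below are illustrative assumptions, not the actual mythos layout. Each client polls the head node's web server from cron and runs a newly published dosomething.sh exactly once:

```python
# Hypothetical client-side check-in, run from cron every 10 minutes, e.g.:
#   */10 * * * * /usr/bin/python3 /opt/checkin.py
# HEAD_NODE, the script name, and the state file are assumptions for illustration.
import hashlib
import subprocess
import urllib.request

HEAD_NODE = "http://head-node.example"        # head node web server (assumed)
SCRIPT_URL = HEAD_NODE + "/dosomething.sh"    # published once renamed from .dosomething.sh
STATE = "/var/tmp/last_script.sha256"         # remembers what was already run

def main():
    try:
        body = urllib.request.urlopen(SCRIPT_URL, timeout=30).read()
    except Exception:
        return  # nothing published yet (or head node unreachable); retry on next cron run

    digest = hashlib.sha256(body).hexdigest()
    try:
        if open(STATE).read().strip() == digest:
            return  # this script has already been run once
    except FileNotFoundError:
        pass

    with open("/var/tmp/dosomething.sh", "wb") as f:
        f.write(body)
    subprocess.run(["bash", "/var/tmp/dosomething.sh"], check=False)  # run it; results reported separately
    with open(STATE, "w") as f:
        f.write(digest)

if __name__ == "__main__":
    main()
```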
17:32:46 <spyderdyne> in our scale tests, mongodb has been unable to even keep up with more than a few thousand instances
17:33:01 <spyderdyne> so it hasn't been useful to us
17:33:01 <gema_> spyderdyne: and what are you using mongo for?
17:33:08 <spyderdyne> ceilodb
17:33:11 <gema_> ack
17:33:34 <gema_> there are scalability issues with ceilometer + mongo
17:33:45 <gema_> we've also been unable to use it with even hundreds, let alone thousands
17:33:57 <gema_> use it as in not only storing data but also mining it later
17:34:30 <gema_> but that's an issue for another day, I guess
17:34:39 <gema_> spyderdyne: very interesting all you guys are doing
17:34:44 <spyderdyne> i was hoping someone would put more support behind the gnocchi project, but in our case with Ceph for all our storage, it may cause other issues using that as well
17:35:14 <spyderdyne> the issue is all the writes needed
17:36:47 <spyderdyne> it makes sense to distribute them, and swift makes sense to use, but if you are using your object store as a block store as well it just shifts the issue from adding another db platform to scale out in the control plane to making sure the storage can keep up
17:37:49 <gema_> spyderdyne: are results from this testing going to be available publicly?
17:38:03 <spyderdyne> i wanted to ask, since it wasn't very well received, should i remove all the mythos stuff from our team repo and just keep it in my github from now on?
17:38:22 <spyderdyne> the results from the intel testing will be made public
17:38:22 <gema_> spyderdyne: you can do whatever you prefer
17:38:32 <malini> spyderdyne: 'well received' where?
17:38:33 <gema_> spyderdyne: I don't think anyone is against your project
17:39:02 <gema_> spyderdyne: the only issue was making us present this when we knew nothing about it, without you there
17:39:32 <spyderdyne> the results from our internal testing will be made public once we prove that our hardware + architecture blows the doors off of or at least provides a high water mark compared to other platforms
17:39:46 <malini> spyderdyne: my only concern was presenting this as how everybody tests their installation of openstack, when you were the only one who knew what it is
17:39:59 <spyderdyne> :)
17:40:12 <malini> spyderdyne: it has nothing to do with how good mythos is - which I am sure is pretty cool
17:40:20 <gema_> absolutely
17:40:26 <gema_> spyderdyne: you scared us
17:40:37 <malini> & I cant even handle timezones :D
17:40:38 <spyderdyne> lol
17:41:06 <malini> spyderdyne: but if there is anything we can do to make mythos better, I would love to help :)
17:41:31 <gema_> yep, agreed, unfortunately my 8 hypervisors are already overloaded x)
17:41:43 <gema_> and it doesn't look like they'd make much of a difference
17:41:43 <malini> :D
17:42:22 <gema_> spyderdyne: I think the work you are doing is keeping this group alive
17:42:38 <gema_> spyderdyne: and hopefully we'll find a way to chip in
17:42:51 <malini> +1
17:43:09 <gema_> I will present what I have been working on next week
17:43:17 <gema_> explain it in detail like you've been doing
17:43:55 <spyderdyne> i would ask any of you who are able to spin up an ubuntu vm with vt support to check out the code and give it a spin
17:43:58 <gema_> it's a jenkins plugin + a REST api for openstack on openstack functional testing
17:44:10 <spyderdyne> i could use the feedback and my docs definitely need improvement
17:44:28 <gema_> spyderdyne: vt support?
17:44:39 <spyderdyne> vanderpool
17:44:44 <spyderdyne> hardware virtualization
17:44:55 <gema_> yep, I can do that
17:45:07 <gema_> how many VMs do I need to install/use mythos?
17:45:29 <gema_> anyway, don't say anything
17:45:32 <gema_> I will try and ask questions
17:45:37 <gema_> you can improve documentation based on that
17:46:49 <gema_> spyderdyne: although I am not sure it'll happen before Xmas
17:47:06 <gema_> #action gema to try mythos on ubuntu with vt support
17:47:29 <malini> I will try to find an ubuntu with vt support
17:47:40 <malini> If I can get hold of one, I'll do it too
17:47:41 <spyderdyne> 1 ubuntu 15.04 vm, 2048MB ram, 100GB hdd (until i get packer to work and build a smaller source image)
17:48:09 <gema_> spyderdyne: so you install it from an image?
17:48:24 <spyderdyne> as long as your openstack instances support hardware virt in libvirt (they are supposed to) then it should work
17:48:39 <gema_> spyderdyne: and how do we check that it works?
17:48:48 <spyderdyne> it's using virtualbox (boo) and VB needs VT to run 64-bit guests
17:49:28 <spyderdyne> http://askubuntu.com/questions/292217/how-to-enable-intel-vt-x
17:49:43 <gema_> #link http://askubuntu.com/questions/292217/how-to-enable-intel-vt-x
17:49:46 <spyderdyne> the script has a check built in and will fail gracefully if it isn't supported
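A minimal sketch of the kind of graceful hardware-virtualization check mentioned here, assuming a Linux host and looking for the vmx/svm CPU flags; this is not the actual mythos check:

```python
# Look for Intel VT-x (vmx) or AMD-V (svm) flags in /proc/cpuinfo and exit
# gracefully if neither is present. Linux-only; illustrative, not the real script.
import sys

def has_hw_virt(cpuinfo="/proc/cpuinfo"):
    with open(cpuinfo) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags or "svm" in flags:
                    return True
    return False

if __name__ == "__main__":
    if not has_hw_virt():
        sys.exit("no VT-x/AMD-V support detected; VirtualBox cannot run 64-bit guests here")
    print("hardware virtualization available")
```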
17:49:58 <gema_> ok
17:50:03 <spyderdyne> it's getting very friendly
17:50:05 <spyderdyne> :)
17:50:20 <spyderdyne> i am close to 1,000 commits now
17:50:27 <gema_> perfect
17:50:39 <gema_> you've been busy! :D
17:51:11 <gema_> alright, thanks so much for explaining it and we'll start helping with compatibility at least
17:51:16 <gema_> and documentation reviews
17:51:34 <gema_> we are 10 mins from end of meeting
17:51:40 <gema_> malini: anything from you?
17:52:24 <gema_> clee: ?
17:52:26 <gema_> hockeynut: ?
17:52:32 <hockeynut> I'm good
17:54:29 <gema_> alright
17:54:36 <gema_> spyderdyne: do you have anything else?
17:54:43 <malini> sorry - had to step away for a call
17:54:55 <gema_> malini: no worries
17:55:12 <gema_> alright, calling it a day then
17:55:22 <malini> thanks spyderdyne - this is really cool!
17:55:29 <gema_> #endmeeting