17:06:49 <boris-42> #startmeeting Rally
17:06:50 <openstack> Meeting started Tue Oct 29 17:06:49 2013 UTC and is due to finish in 60 minutes.  The chair is boris-42. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:06:51 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:06:53 <openstack> The meeting name has been set to 'rally'
17:07:02 <boris-42> #topic what was done
17:07:02 <jaypipes> o/
17:07:30 <boris-42> Okay let's start one more time
17:07:38 <geekinutah> :-)
17:07:39 <boris-42> There are a few parts of Rally
17:07:53 <boris-42> Deployer, cloud verifier, and benchmark engine
17:08:07 <boris-42> So Deployer consists of 2 parts
17:08:22 <boris-42> Things that deploy, and things that provide servers/VMs and so on
17:08:30 <boris-42> the current state is as follows
17:08:36 <boris-42> we have 2 deployers
17:08:48 <boris-42> DummyEngine - just returns the endpoints of an existing cloud
17:09:04 <harlowja> cool
17:09:07 <boris-42> DevstackEngine - uses devstack to deploy a multinode OpenStack cloud
17:09:21 <boris-42> And we have 3 providers
17:09:52 <boris-42> DummyProvider - returns a list of (virtual) servers that already exist
17:10:08 <boris-42> VirshProvider - creates VMs on a specified (remote) server
17:10:26 <boris-42> LxcProvider - uses any of these providers to create LXC containers inside
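A minimal sketch of the composition described above, where an LXC-style provider gets real hosts from another provider and creates containers on them; all class and method names here are illustrative assumptions, not Rally's actual code:

    # Illustrative only: names are assumptions, not Rally's real classes.
    class ExampleLxcLikeProvider(object):
        """Wraps another provider and turns its hosts into LXC containers."""

        def __init__(self, inner_provider, containers_per_host=4):
            self.inner_provider = inner_provider
            self.containers_per_host = containers_per_host

        def create_vms(self):
            hosts = self.inner_provider.create_vms()
            containers = []
            for host in hosts:
                for i in range(self.containers_per_host):
                    # e.g. run lxc-create/lxc-start on the host over ssh
                    containers.append("%s-lxc-%d" % (host, i))
            return containers

        def destroy_vms(self):
            # destroy the containers, then let the inner provider clean up
            self.inner_provider.destroy_vms()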
17:10:30 <harlowja> neat, boris-42 what does the devstack engine run on (where does it deploy?)
17:10:47 <boris-42> The DevStack engine uses one of the providers to get servers
17:10:53 <boris-42> and then deploys on those servers
17:10:55 <harlowja> k
17:11:01 <boris-42> as any other engine
17:11:04 <boris-42> *future*
17:11:19 <harlowja> k, so it could use virshprovider to bootstrap devstack, which would then get your babycloud going
17:11:31 <harlowja> *just made up 'babycloud'
17:11:49 <boris-42> baby cloud going?)
17:12:03 <boris-42> The DevStack-based engine will deploy a cloud on the provided servers
17:12:12 <boris-42> or any other engine
17:12:20 <boris-42> So the next steps are here:
17:12:21 <akscram> the devstack engine just installs devstack on the remote host via ssh
17:12:28 <harlowja> k, bootstrapping the cloud using an existing provider
17:13:01 <boris-42> okay so the next steps are more providers & engines
17:13:03 <harlowja> seems more straightfoward than the tripleOOO thing (or at least what i know about it)
17:13:23 <boris-42> OpenStackProvider <- that will use an existing cloud to provide VMs
17:13:41 <harlowja> ya +1 for that
17:13:50 <boris-42> LXCEngine - that will use LXCProvider and any other engine to make deployment rapid
17:13:57 <boris-42> so we will install only one compute node
17:14:05 <boris-42> and then just copy-paste containers
17:14:11 <harlowja> cool
17:14:13 <boris-42> it works really fast=)
17:14:18 <boris-42> with zfs
17:14:22 <harlowja> def
17:14:40 <harlowja> is ZFS in modern kernels ? (can't remember)
17:15:11 <boris-42> it could be done!=)
17:15:19 <geekinutah> it's not
17:15:28 <boris-42> we did it on ubuntu=)
17:15:30 <geekinutah> you can use user mode zfs though
17:15:31 <harlowja> agreed, it just might make it harder for others if the default is ZFS (and its not everywhere)
17:15:38 <harlowja> *just a thought*
17:16:15 <boris-42> Yeah but even without zfs it is still faster
17:16:18 <harlowja> k
17:16:19 <boris-42> than installing every time
17:16:25 <harlowja> agreed
17:16:42 <boris-42> So next step is Verification of cloud
17:16:53 <giulivo> boris-42, sorry just a question but I haven't looked at the code
17:17:02 <boris-42> giulive sure
17:17:06 <boris-42> giulivo*
17:17:34 <giulivo> I assume this will make use of multinode deployments; am I right that this is stuff which should be implemented at the level of the engine?
17:18:39 <boris-42> giulivo could you explain what you mean by "this"?
17:19:30 <giulivo> yes, again sorry as I haven't looked at the code, I was trying to figure out which pieces are pluggable as I was interested in how the multinode deployment takes place
17:19:43 <giulivo> as currently devstack isn't an option for that
17:19:53 <boris-42> Okay I understand the question
17:20:08 <boris-42> DeployEngines and Providers are pluggable
17:20:32 <boris-42> a DeployEngine should be a subclass of DeployFactory and implement 2 methods, deploy() and cleanup()
17:21:05 <harlowja> #action harlowja make AnvilDeployEngine
17:21:07 <boris-42> deploy() gets a config file as input (every engine has its own configuration)
17:21:26 <boris-42> and deploy() should return the endpoints of the cloud
17:21:41 <boris-42> a server provider just needs to be able to create_vms() and destroy_vms()
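To make that contract concrete, here is a minimal sketch of a custom deploy engine; only the DeployFactory base class and the deploy()/cleanup() methods come from the description above, everything else (the import path and the shape of the returned endpoints) is an assumption:

    # Sketch only: import path and endpoint keys are assumptions.
    from rally.deploy.engine import DeployFactory  # hypothetical path


    class MyCloudEngine(DeployFactory):
        """Deploys (or simply describes) a cloud and returns its endpoints."""

        def __init__(self, config):
            # every engine has its own configuration format
            self.config = config

        def deploy(self):
            # provision servers via a provider, install OpenStack on them...
            # deploy() must return the endpoints of the resulting cloud
            return {"auth_url": self.config["auth_url"]}

        def cleanup(self):
            # tear down whatever deploy() created
            pass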
17:21:48 <giulivo> boris-42, oh great, I see it now, thanks!
17:22:03 <boris-42> giulivo so if you have your own private cloud
17:22:17 <boris-42> giulivo you will be able to make a special provider and use the rest of Rally
17:22:27 <boris-42> without any problem
17:23:04 <boris-42> giulivo or if you are an Anvil fanatic you can even build your own special deploy engine for Rally and use the existing providers and the rest of Rally
17:23:23 <giulivo> yeah this is the scenario I had in mind, thanks
17:23:41 <boris-42> okay so let's speak now about verification part
17:23:45 <harlowja> anvil fanatic, ha
17:23:51 <boris-42> #action Cloud Verification
17:23:59 <harlowja> ^ topic ?
17:24:08 <boris-42> #topic Cloud Verification
17:24:23 <harlowja> :)
17:24:30 <boris-42> Okay, at this moment we use part of fuel-ostf-tests for those cases
17:24:43 <boris-42> it is the wrong approach because we have tempest -)
17:24:48 <boris-42> sdague ^
17:25:03 <harlowja> i guess i question this, it depends on what is being verified
17:25:07 <boris-42> So I would like to switch to tempest, and this is open task
17:25:18 <harlowja> is it the correct thing to do?
17:25:19 <boris-42> harlowja that your cloud works properly=)
17:25:32 <harlowja> ok, so i guess tempest is good fit then?
17:25:39 <boris-42> perfect
17:25:42 <harlowja> k
17:25:43 <boris-42> except few things
17:25:55 <boris-42> I don't know how to run tempest with a specific config
17:26:03 <harlowja> no --config XYZ ?
17:26:11 <boris-42> except by putting the config at /etc/tempest.config
17:26:15 <boris-42> yes there is no --config
17:26:19 <harlowja> hmmm, odd
17:26:27 <boris-42> because we are running tempest through tester
17:26:31 <boris-42> testr*
17:26:44 <giulivo> boris-42, I can suggest https://github.com/pixelb/crudini for this
17:27:01 <giulivo> as the conf is just an ini formatted file
17:27:08 <harlowja> giulivo thats fine as long as u aren't running more than one tempest
17:27:34 <harlowja> idk if boris-42 is planning on, 1 ini file is gonna be problematic if thats the case
17:27:38 <boris-42> harlowja yeah we would like to be able to run multiple tempests for different clouds at the same time
17:27:49 <dkranz> boris-42: Right now tempest uses TEMPEST_CONFIG_DIR and TEMPEST_CONFIG shell vars for dir/file
17:27:49 <harlowja> ya, so then 1 static config not gonna work out so well
17:28:07 <boris-42> dkranz nice
17:28:17 <boris-42> dkranz this seems slow the problem almost=)
17:28:34 <harlowja> slow?
17:28:41 <boris-42> solve*
17:28:44 <harlowja> k
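A rough sketch of how those shell variables could let Rally point separate tempest runs at per-cloud configs; only the two variable names come from dkranz's comment, while the directory layout and the testr invocation are assumptions:

    # Sketch: run tempest (via testr) against a per-cloud config by setting
    # TEMPEST_CONFIG_DIR / TEMPEST_CONFIG.  Paths here are assumptions.
    import os
    import subprocess


    def run_tempest_for_cloud(cloud_name, tempest_dir="/opt/stack/tempest"):
        env = os.environ.copy()
        env["TEMPEST_CONFIG_DIR"] = "/etc/rally/%s" % cloud_name
        env["TEMPEST_CONFIG"] = "tempest.conf"
        return subprocess.call(["testr", "run", "--parallel"],
                               cwd=tempest_dir, env=env)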
17:29:01 <boris-42> dkranz do you know somebody who could help migrate to tempest?)
17:29:38 <dkranz> boris-42: Not sure. Many people are just learning about it.
17:29:47 <boris-42> dkranz about tempest?)
17:29:57 <dkranz> boris-42: No, about rally
17:30:00 <boris-42> lol
17:30:14 <boris-42> dkranz okay it should be quite easy if you are a tempest expert
17:30:30 <boris-42> dkranz there is a method that is run with the endpoints of the cloud
17:30:58 <boris-42> Okay, if somebody is interested in helping I will be really happy=)
17:31:13 <boris-42> okay let's move
17:31:20 <boris-42> #topic benchmark engine
17:31:32 <boris-42> What was done
17:32:28 <boris-42> First of all we should take a look at one of the simplest benchmarks
17:32:32 <boris-42> run VM / stop VM
17:32:53 <boris-42> https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/nova/servers.py#L24-L29
17:33:06 <boris-42> so how does this stuff work
17:33:14 <harlowja> cool
17:33:49 <boris-42> You should create a subclass of rally.benchmark.base.Scenario
17:33:54 <boris-42> and add any method
17:34:02 <boris-42> like boot_and_delete_server
17:34:25 <boris-42> the first 2 parameters, cls and context, are always required
17:34:43 <boris-42> the other parameters are yours
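A rough sketch of what such a scenario class looks like; the base class, the classmethod signature (cls and context first) and the method name come from the discussion and the linked file, while the client helper and nova calls below are assumptions:

    # Sketch of a benchmark scenario: subclass rally.benchmark.base.Scenario
    # and add a classmethod.  The nova client helper below is hypothetical.
    from rally.benchmark import base


    class NovaServers(base.Scenario):

        @classmethod
        def boot_and_delete_server(cls, context, image_id, flavor_id, **kwargs):
            # boot a server and delete it; each call is one benchmark iteration
            nova = cls.nova_client(context)  # hypothetical helper
            server = nova.servers.create("rally-server", image_id, flavor_id)
            server.delete()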
17:34:52 <boris-42> so how to call this method using rally?
17:34:58 <harlowja> magic
17:34:59 <harlowja> !
17:35:04 <boris-42> https://wiki.openstack.org/wiki/Rally/BenchmarkScenarios#boot_and_delete_server
17:35:11 <boris-42> you should pass such a configuration
17:35:11 <harlowja> nice
17:35:14 <boris-42> as input
17:35:31 <boris-42> flavor_id and image_id will be passed as params
17:35:34 <boris-42> to the method
17:35:41 <boris-42> this method will be called 50 times
17:35:57 <boris-42> in 10 threads
17:36:06 <boris-42> 50 times is the total number of calls*
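Roughly what that input configuration amounts to, written here as a Python dict so it can carry comments; times/concurrent and the flavor_id/image_id args come from the discussion, but the exact nesting of the real JSON file is an assumption (see the wiki link above for the actual format):

    benchmark_config = {
        "NovaServers.boot_and_delete_server": [
            {
                "args": {"flavor_id": 1, "image_id": "<image uuid>"},  # passed to the method
                "times": 50,       # total number of calls
                "concurrent": 10,  # how many run at the same time
            }
        ]
    }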
17:36:08 <harlowja> all of those scenarios would form a common 'scenario' set that people can reuse right?
17:36:34 <boris-42> ?)
17:36:57 <boris-42> If you write any method in a subclass of base.Scenario you get a new benchmark
17:37:05 <harlowja> so u can imagine rally providing a common set of scenarios (these config files)
17:37:05 <boris-42> (a benchmark scenario)
17:37:10 <harlowja> *maybe it already does (not sure)
17:37:27 <boris-42> what does "common" mean?)
17:38:00 <harlowja> like in rally there would be a scenario/ folder with lots of config files for the standard use cases to test
17:38:08 <boris-42> yes
17:38:10 <harlowja> k
17:38:12 <boris-42> exactly
17:38:17 <harlowja> cool
17:38:20 <boris-42> and then you are able to build a really pretty config
17:38:36 <boris-42> https://wiki.openstack.org/wiki/Rally/HowTo#Prepare_your_config_file
17:38:38 <boris-42> like this one
17:38:46 <harlowja> is that json?
17:38:48 <boris-42> it will make 2 benchmarks
17:38:50 <boris-42> yes json
17:38:53 <harlowja> hmmm
17:39:01 <harlowja> json doesn't allow comments in the files
17:39:06 <harlowja> comments would seem nice to have
17:39:11 <boris-42> yes it doesn't allow=)
17:39:25 <boris-42> json is better than xml=))
17:39:26 <harlowja> comments are very useful to describe the scenario and the reasons why
17:39:28 <harlowja> yaml?
17:39:31 <harlowja> yaml allows comments
17:39:47 <akscram> yaml is a good alternative ;)
17:40:18 <harlowja> ya, i'd prefer that, cause i think comments in these config files will be pretty important
17:40:30 <boris-42> hmmm
17:40:35 <boris-42> I don't use them=)
17:40:40 <harlowja> https://github.com/openstack/requirements/blob/master/global-requirements.txt#L79
17:40:41 <boris-42> but Ok this is good advice=)
17:40:54 <boris-42> harlowja makes sense=)
17:40:56 <harlowja> k
17:41:03 <boris-42> okay we should discuss it also
17:41:11 <boris-42> but in openstack-rally
17:41:13 <boris-42> =)
17:41:17 <harlowja> ok dokie
17:41:29 <boris-42> so you are able to call your methods with different parameters of the benchmark engine
17:41:33 <akscram> in the case of YAML we need to come up with validation of configs
17:41:53 <harlowja> akscram why just in the case of yaml, seems like in the case of any input :-/
17:41:58 <boris-42> times, concurrent you already saw
17:42:02 <boris-42> there are two new params
17:42:12 <boris-42> tenants and user_per_tenant
17:42:25 <boris-42> so the benchmark engine will create real OpenStack tenants and users
17:42:31 <kylichuku> JSON + JSON schema for configs sounds like the easiest approach
17:42:34 <akscram> harlowja: for JSON it's already done with jsonschema
17:42:35 <boris-42> and use them to make all actions)
17:42:43 <boris-42> kylichuku +1
17:43:19 <harlowja> kylichuku akscram i'd disagree, u can use jsonschema with yaml, in fact i've created such a thing for cloudinit
17:43:37 <boris-42> fight fight =)
17:43:56 <harlowja> let me locate the cloudinit code
17:44:17 <harlowja> jsonschema just cares about basic types, not that the source is json
17:44:26 <harlowja> anyways
17:44:27 <boris-42> hmm
17:44:32 <akscram> harlowja: right
17:44:34 <boris-42> jsonschema cares about a lot of things=)
17:44:48 <kylichuku> let's discuss important things, not about formats :)
17:44:50 <boris-42> =))
17:44:56 <boris-42> okay important thing
17:45:00 <akscram> it just validates a dict against a dict
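A small sketch of the point being made here: jsonschema validates plain dicts, so a YAML config (which allows comments) can be loaded and validated with the same schema a JSON config would use; the schema below is a toy example, not Rally's real config schema:

    import jsonschema
    import yaml

    # toy schema, not Rally's actual one
    SCHEMA = {
        "type": "object",
        "properties": {
            "times": {"type": "integer", "minimum": 1},
            "concurrent": {"type": "integer", "minimum": 1},
        },
        "required": ["times"],
    }

    config_text = """
    # comments are allowed in YAML, which is the whole point here
    times: 50
    concurrent: 10
    """

    jsonschema.validate(yaml.safe_load(config_text), SCHEMA)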
17:45:12 <boris-42> akscram let's discuss this thing in openstack-rally=)
17:45:31 <boris-42> What are our plans around the current benchmark engine
17:45:56 <boris-42> We are able to run only "constant" load
17:46:11 <boris-42> so we always run a fixed number of scenarios
17:46:28 <boris-42> and what we need is periodic functionality
17:47:13 <boris-42> e.g. run this method every 2 minutes or every random(0,10) minutes
17:47:36 <boris-42> Another parameter is duration
17:47:56 <boris-42> so instead of times, say run this benchmark for 2hrs
17:48:13 <boris-42> so at this moment we have open discussion about new format
17:48:28 <boris-42> https://docs.google.com/a/mirantis.com/document/d/1oodJqWLY06ZPUO9ar-Fz-IF4ujRIxEpSwVFftQ6Fjvs/edit
17:49:35 <boris-42> The goals are: keep the config as simple as possible
17:49:41 <boris-42> and add new functionality
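Purely as an illustration of those two ideas (periodic execution, and a duration limit instead of a fixed number of runs), a hypothetical config entry might look like the sketch below; none of these keys exist yet, the real format is exactly what the document linked above is trying to settle:

    # Hypothetical only: illustrates "periodic" and "duration" style options
    # that are still under discussion; no such keys exist in Rally today.
    proposed_run_config = {
        "NovaServers.boot_and_delete_server": [
            {
                "args": {"flavor_id": 1, "image_id": "<image uuid>"},
                "run_every": "random(0, 10) minutes",  # periodic load instead of constant
                "duration": "2h",                      # stop after 2 hours, not after N calls
            }
        ]
    }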
17:49:47 <dkranz> boris-42: The whole format, driver, config part is very similar to https://github.com/openstack/tempest/tree/master/tempest/stress
17:50:22 <dkranz> If there is to be a discussion about rally/tempest you should look at that. It is not a lot of code.
17:50:33 <boris-42> dkranz actually
17:50:54 <boris-42> dkranz you don't support noise and simultaneous running of N scenarios
17:51:05 <boris-42> noise that is created by other scenarios
17:51:12 <dkranz> boris-42: I did not say it is feature-to-feature exactly the same.
17:51:36 <dkranz> boris-42: But the similarity is why the subject of tempest came up I think.
17:51:53 <boris-42> dkranz probably at this moment it is quite similar
17:52:06 <boris-42> dkranz but the use cases are different
17:52:16 <dkranz> boris-42: And the question was whether one framework could support both stress and performance tests
17:52:45 <dkranz> boris-42: I am not actually claiming to know the answer to that question
17:52:55 <kylichuku> both tempest and rally formats are way too simplified
17:52:56 <boris-42> dkranz actually I would like to clear up the situation
17:53:08 <boris-42> dkranz the goal of tempest is as follows
17:53:29 <dkranz> kylichuku: Yes, and they would both evolve
17:53:30 <kylichuku> even for the same scenario different users in different tenants will have slightly different workflow
17:53:36 <boris-42> Very fast stress testing of the main OpenStack functionality
17:53:47 <boris-42> to be able to work inside the gate
17:53:59 <boris-42> and ensure that our patches don't hurt the performance of the cloud
17:54:25 <dkranz> boris-42: Sure
17:54:31 <boris-42> So if you are concentrating on a tool that should work fast and in the gate, that is one set of scenarios and a special engine
17:54:37 <boris-42> And Rally has another goal
17:54:45 <giulivo> boris-42, would you see any possibility to concatenate simple tests into a scenario (using the config file) and define threads and loops as a global parameter?
17:55:03 <kylichuku> something that I would be happy to see is a separation of workflow definition and load specification
17:55:37 <boris-42> giulivo threads and loops shouldn't be in one place, at least not in that format
17:55:44 <boris-42> giulivo let me show a sample
17:56:20 <giulivo> boris-42, yeah I get that but I was thinking about concatenating simple tests into a more complex scenario from just the config file
17:56:30 <giulivo> so that one could "make up" a scenario from a set of simple tests
17:56:42 <boris-42> giulivo http://pastebin.com/j9BREqE9
17:56:51 <giulivo> in that case I'd just expect the tests to be executed in a sequence and threads and loops would be global
17:57:04 <boris-42> giulivo this is another thing
17:57:24 <kylichuku> for example, workflow could be "Provision VM, keep it alive for 15 minutes, generate some work on it, snapshot, shutdown" with a separate specification of VM params (# of vCPUs, RAM and attached storage) multiplied by usage pattern (N tenants, M users per tenant, # of runs per hour)
17:57:24 <boris-42> giulivo we are going to make it possible to run multiple scenarios in the same time
17:57:55 <kylichuku> all these 3 things should be described in a declarative fashion with the underlying framework knowing how to combine these rules into a workload profile
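A hypothetical sketch of that separation, with the workflow, the VM specification, and the usage pattern described independently so a framework could combine them into a load profile; nothing here is an existing Rally or tempest format:

    # Hypothetical declarative pieces; the framework would combine them.
    workflow = ["provision_vm", "keep_alive:15m", "generate_load",
                "snapshot", "shutdown"]

    vm_spec = {"vcpus": 2, "ram_mb": 4096, "attached_storage_gb": 40}

    usage_pattern = {"tenants": 10, "users_per_tenant": 5, "runs_per_hour": 20}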
17:58:35 <giulivo> kylichuku, that would be great yes, I think that explains better what I had in mind yes
17:59:08 <boris-42> kylichuku hmm and where do you see the problem now?)
17:59:42 <boris-42> kylichuku you are specifying all these parameters, like image, flavor, how much time to run, and concurrency?)
17:59:57 <kylichuku> boris-42 both tempest and rally formats describe only the 3rd set of params
17:59:57 <stevemar> dolphm: o/
18:00:22 <kylichuku> boris-42 while the workflow is embedded into the test (written in python) and the VM specification is outside of the equation
18:00:29 <joesavak> o/ : )
18:00:46 <dolphm> o/
18:00:51 <stevemar> joesavak \o
18:00:56 <boris-42> we should end meeting=)
18:01:01 <stevemar> maybe :)
18:01:05 <giulivo> boris-42, not to bother but yeah mainly I think it'd be nice to move the workflow out of the test and more into the config
18:01:14 <topol> o/
18:01:21 <fabiog> o/
18:01:22 <boris-42> giulivo okay let's move to openstack-rally
18:01:27 <gyee> \o
18:01:41 <stevemar> topol, they let you out of your meetings?
18:01:41 <dstanek> o/
18:01:47 <boris-42> #endmeeting