17:06:49 #startmeeting Rally
17:06:50 Meeting started Tue Oct 29 17:06:49 2013 UTC and is due to finish in 60 minutes. The chair is boris-42. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:06:51 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:06:53 The meeting name has been set to 'rally'
17:07:02 #topic what was done
17:07:02 o/
17:07:30 Okay, let's start one more time
17:07:38 :-)
17:07:39 There are a few parts of Rally
17:07:53 Deployer, cloud verifier, and benchmark engine
17:08:07 So the Deployer consists of 2 parts
17:08:22 Things that deploy, and things that provide servers/VMs and so on
17:08:30 the current state is as follows
17:08:36 we have 2 deployers
17:08:48 DummyEngine - just returns the endpoints of an existing cloud
17:09:04 cool
17:09:07 DevstackEngine - uses devstack to deploy a multinode OpenStack cloud
17:09:21 And we have 3 providers
17:09:52 DummyProvider - returns a list of servers (virtual servers) that already exist
17:10:08 VirshProvider - creates VMs on a specified (remote) server
17:10:26 LxcProvider - uses any of these providers to create LXC containers inside
17:10:30 neat, boris-42 what does the devstack engine run on (where does it deploy?)
17:10:47 DevStack engine uses one of the providers to get servers
17:10:53 and then deploys on these servers
17:10:55 k
17:11:01 as any other engine
17:11:04 *future*
17:11:19 k, so it could use VirshProvider to bootstrap devstack, which would then get your babycloud going
17:11:31 *just made up 'babycloud'
17:11:49 baby cloud going?)
17:12:03 the DevStack based engine will deploy the cloud on provided servers
17:12:12 or any other engine
17:12:20 So the next steps are here:
17:12:21 the devstack engine just installs devstack on the remote host via ssh
17:12:28 k, bootstrapping the cloud using an existing provider
17:13:01 okay so the next step is more providers & engines
17:13:03 seems more straightforward than the TripleO thing (or at least what i know about it)
17:13:23 OpenStackProvider <- that will use an existing cloud to provide VMs
17:13:41 ya +1 for that
17:13:50 LXCEngine - that will use LXCProvider and any other engine to make deployment rapid
17:13:57 so we will install only one compute node
17:14:05 and then just copy-paste containers
17:14:11 cool
17:14:13 it works really fast=)
17:14:18 with zfs
17:14:22 def
17:14:40 is ZFS in modern kernels ? (can't remember)
17:15:11 it could be done!=)
17:15:19 it's not
17:15:28 we did it on ubuntu=)
17:15:30 you can use user mode zfs though
17:15:31 agreed, it just might make it harder for others if the default is ZFS (and it's not everywhere)
17:15:38 *just a thought*
17:16:15 Yeah but even without zfs it is still faster
17:16:18 k
17:16:19 than installing every time
17:16:25 agreed
17:16:42 So the next step is Verification of the cloud
17:16:53 boris-42, sorry just a question but I haven't looked at the code
17:17:02 giulive sure
17:17:06 giulivo*
17:17:34 I assume this will make use of multinode deployments, am I right that is stuff which should be implemented at the level of the engine?
17:18:39 giulivo could you explain what you mean by "this"?
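To make the deployer/provider split described above a bit more concrete, here is a minimal Python sketch of how an engine might consume a provider to obtain servers, deploy on them, and return endpoints. The class and helper names are illustrative only and are not taken from Rally's actual code.

    # Illustrative sketch only -- not Rally's real classes. It mirrors the idea
    # above: a provider hands out servers, an engine deploys on them and
    # returns the cloud endpoints.

    class ExistingServersProvider(object):
        """Returns servers that already exist (nothing is created)."""
        def __init__(self, addresses):
            self.addresses = addresses

        def create_vms(self):
            return list(self.addresses)

        def destroy_vms(self):
            pass  # nothing to clean up for pre-existing servers


    class DevstackLikeEngine(object):
        """Deploys OpenStack on whatever servers the provider supplies."""
        def __init__(self, provider):
            self.provider = provider

        def deploy(self):
            servers = self.provider.create_vms()
            for server in servers:
                self._install_devstack_over_ssh(server)  # hypothetical helper
            # return the endpoints of the freshly deployed cloud
            return {"auth_url": "http://%s:5000/v2.0" % servers[0]}

        def _install_devstack_over_ssh(self, server):
            print("would ssh to %s and run stack.sh" % server)


    endpoints = DevstackLikeEngine(ExistingServersProvider(["10.0.0.5"])).deploy()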
17:19:30 yes again please sorry as I haven't looked at the code, I was trying to figure out which pieces are pluggable as I was interested in how the multinode deployment takes place
17:19:43 as currently devstack isn't an option for that
17:19:53 Okay I understand the question
17:20:08 DeployEngines and Providers are pluggable
17:20:32 a DeployEngine should be a subclass of DeployFactory and implement 2 methods, deploy() and cleanup()
17:21:05 #action harlowja make AnvilDeployEngine
17:21:07 deploy() gets a config file as input (every engine has its own configuration)
17:21:26 and deploy() should return the endpoints of the cloud
17:21:41 a server provider should just be able to create_vms() and destroy_vms()
17:21:48 boris-42, oh great, I see it now, thanks!
17:22:03 giulivo so if you have your own private cloud
17:22:17 giulivo you will be able to make a special provider and use the rest of Rally
17:22:27 without any problem
17:23:04 giulivo or if you are an Anvil fanatic you can even build your own special deploy engine for Rally and use the existing providers and the rest of Rally
17:23:23 yeah this is the scenario I had in mind, thanks
17:23:41 okay so let's speak now about the verification part
17:23:45 anvil fanatic, ha
17:23:51 #action Cloud Verification
17:23:59 ^ topic ?
17:24:08 #topic Cloud Verification
17:24:23 :)
17:24:30 Okay at this moment we use part of fuel-ostf-tests for these cases
17:24:43 it is the wrong approach because we have tempest -)
17:24:48 sdague ^
17:25:03 i guess i question this, it depends on what is being verified
17:25:07 So I would like to switch to tempest, and this is an open task
17:25:18 is it the correct thing to do?
17:25:19 harlowja that your cloud works properly=)
17:25:32 ok, so i guess tempest is a good fit then?
17:25:39 perfect
17:25:42 k
17:25:43 except a few things
17:25:55 I don't know how to run tempest with a specific config
17:26:03 no --config XYZ ?
17:26:11 except putting the config in /etc/tempest.config
17:26:15 yes there is no --config
17:26:19 hmmm, odd
17:26:27 because we are running tempest through tester
17:26:31 testr*
17:26:44 boris-42, I can suggest https://github.com/pixelb/crudini for this
17:27:01 as the conf is just an ini-formatted file
17:27:08 giulivo thats fine as long as u aren't running more than one tempest
17:27:34 idk if boris-42 is planning on, 1 ini file is gonna be problematic if thats the case
17:27:38 harlowja yeah we would like to be able to run multiple tempests for different clouds at the same time
17:27:49 boris-42: Right now tempest uses the TEMPEST_CONFIG_DIR and TEMPEST_CONFIG shell vars for dir/file
17:27:49 ya, so then 1 static config not gonna work out so well
17:28:07 dkranz nice
17:28:17 dkranz this seems slow the problem almost=)
17:28:34 slow?
17:28:41 solve*
17:28:44 k
17:29:01 dkranz do you know somebody that could help to migrate to tempest?)
17:29:38 boris-42: Not sure. Many people are just learning about it.
17:29:47 dkranz about tempest?)
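A short sketch of the per-cloud config idea dkranz mentions above: since tempest reads TEMPEST_CONFIG_DIR and TEMPEST_CONFIG from the environment, each cloud can get its own config file instead of a single /etc/tempest.config. The directory layout, file naming, and testr invocation below are assumptions, not verified Rally code.

    # Sketch: run tempest against several clouds by pointing the
    # TEMPEST_CONFIG_DIR / TEMPEST_CONFIG environment variables (mentioned
    # above) at a per-cloud config file.
    import os
    import subprocess

    def run_tempest_for_cloud(tempest_dir, cloud_name, config_dir):
        env = os.environ.copy()
        env["TEMPEST_CONFIG_DIR"] = config_dir                 # e.g. /etc/rally/<cloud>/
        env["TEMPEST_CONFIG"] = "tempest-%s.conf" % cloud_name  # hypothetical file name
        # Invocation is an assumption: tempest was driven through testr at the time.
        return subprocess.call(["testr", "run", "--parallel"],
                               cwd=tempest_dir, env=env)

    run_tempest_for_cloud("/opt/stack/tempest", "cloud-a", "/etc/rally/cloud-a")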
17:29:57 boris-42: No, about rally
17:30:00 lol
17:30:14 dkranz okay it should be quite easy if you are a tempest expert
17:30:30 dkranz there is the method that is run with the endpoints of the cloud
17:30:58 Okay if somebody is interested in helping I will be really happy=)
17:31:13 okay let's move on
17:31:20 #topic benchmark engine
17:31:32 What was done
17:32:28 First of all we should take a look at one of the simplest benchmarks
17:32:32 run VM / stop VM
17:32:53 https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/nova/servers.py#L24-L29
17:33:06 so how does this stuff work
17:33:14 cool
17:33:49 You should create a subclass of rally.benchmark.base.Scenario
17:33:54 and add any method
17:34:02 like boot_and_delete_server
17:34:25 the first 2 parameters, cls and context, are always required
17:34:43 the other parameters are yours
17:34:52 so how to call this method using rally?
17:34:58 magic
17:34:59 !
17:35:04 https://wiki.openstack.org/wiki/Rally/BenchmarkScenarios#boot_and_delete_server
17:35:11 you should put such a configuration
17:35:11 nice
17:35:14 as input
17:35:31 flavor_id, image_id will be passed as params
17:35:34 to the method
17:35:41 this method will be called 50 times
17:35:57 in 10 threads
17:36:06 50 times is the total amount of calls*
17:36:08 all of those scenarios would form a common 'scenario' set that people can reuse right?
17:36:34 ?)
17:36:57 If you write any method in a subclass of base.Scenario you get a new benchmark
17:37:05 so u can imagine rally providing a common set of scenarios (these config files)
17:37:05 (benchmark scenario)
17:37:10 *maybe it already does (not sure)
17:37:27 what does "common" mean?)
17:38:00 like in rally there would be a scenario/ folder with lots of config files for the standard use cases to test
17:38:08 yes
17:38:10 k
17:38:12 exactly
17:38:17 cool
17:38:20 and then you are able to build a really pretty config
17:38:36 https://wiki.openstack.org/wiki/Rally/HowTo#Prepare_your_config_file
17:38:38 like this one
17:38:46 is that json?
17:38:48 it will make 2 benchmarks
17:38:50 yes json
17:38:53 hmmm
17:39:01 json doesn't allow comments in the files
17:39:06 comments would seem nice to have
17:39:11 yes it doesn't allow them=)
17:39:25 json is better than xml=))
17:39:26 comments are very useful to describe a scenario and the reasons why
17:39:28 yaml?
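To make the scenario mechanism above concrete, here is a sketch of a scenario subclass and a matching config. It follows the shape described in the discussion (a subclass of the Scenario base class, cls and context first, 50 total calls in 10 threads), but the class name, the nova client helper, and the config keys are illustrative assumptions rather than copies of the linked servers.py and wiki page.

    # Illustrative sketch of a benchmark scenario in the spirit of the
    # discussion above; cls.nova_client and the config keys are assumptions.
    from rally.benchmark import base

    class NovaServersSketch(base.Scenario):

        @classmethod
        def boot_and_delete_server(cls, context, image_id, flavor_id):
            # cls and context are always the first two parameters;
            # everything after them comes from the "args" in the config.
            server = cls.nova_client.servers.create("rally-vm", image_id, flavor_id)
            server.delete()

    # A config in the shape discussed above (50 calls in total, 10 in
    # parallel); the exact key names are an assumption:
    config = {
        "NovaServersSketch.boot_and_delete_server": [
            {
                "args": {"image_id": "<image uuid>", "flavor_id": 1},
                "times": 50,
                "concurrent": 10,
            }
        ]
    }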
17:39:31 yaml allows comments
17:39:47 yaml is a good alternative ;)
17:40:18 ya, i'd prefer that, cause i think comments in these config files will be pretty important
17:40:30 hmmm
17:40:35 I don't use them=)
17:40:40 https://github.com/openstack/requirements/blob/master/global-requirements.txt#L79
17:40:41 but Ok this is good advice=)
17:40:54 harlowja makes sense=)
17:40:56 k
17:41:03 okay we should discuss it also
17:41:11 but in openstack-rally
17:41:13 =)
17:41:17 ok dokie
17:41:29 so you are able to call your methods with different parameters of the benchmark engine
17:41:33 in the case of YAML we need to come up with validation of configs
17:41:53 akscram why just in the case of yaml, seems like in the case of any input :-/
17:41:58 times, concurrent you already saw
17:42:02 there are two new params
17:42:12 tenants and user_per_tenant
17:42:25 so the benchmark engine will create real OpenStack tenants and users
17:42:31 JSON + JSON Schema for configs sounds like the easiest approach
17:42:34 harlowja: for JSON it's already done with jsonschema
17:42:35 and use them to make all actions)
17:42:43 kylichuku +1
17:43:19 kylichuku akscram i'd disagree, u can use jsonschema with yaml, in fact i've created such a thing for cloudinit
17:43:37 fight fight =)
17:43:56 let me locate the cloudinit code
17:44:17 jsonschema just cares about basic types, not that the source is json
17:44:26 anyways
17:44:27 hmm
17:44:32 harlowja: right
17:44:34 jsonschema cares about a lot of things=)
17:44:48 let's discuss important things, not formats :)
17:44:50 =))
17:44:56 okay important thing
17:45:00 it just validates a dict against a dict
17:45:12 akscram let's discuss this thing in openstack-rally=)
17:45:31 What are our plans around the current benchmark engine
17:45:56 We are able to run only "constant" load
17:46:11 so we are always doing a fixed number of scenarios
17:46:28 and what we need is periodic functionality
17:47:13 e.g. run this method every 2 minutes or every random(0,10) minutes
17:47:36 Another parameter is duration
17:47:56 so instead of times, say run this benchmark for 2hrs
17:48:13 so at this moment we have an open discussion about the new format
17:48:28 https://docs.google.com/a/mirantis.com/document/d/1oodJqWLY06ZPUO9ar-Fz-IF4ujRIxEpSwVFftQ6Fjvs/edit
17:49:35 The goals are: keep the config as simple as possible
17:49:41 and add new functionality
17:49:47 boris-42: The whole format, driver, config part is very similar to https://github.com/openstack/tempest/tree/master/tempest/stress
17:50:22 If there is to be a discussion about rally/tempest you should look at that. It is not a lot of code.
17:50:33 dkranz actually
17:50:54 dkranz you don't support noise and simultaneous running of N scenarios
17:51:05 noise that is created by other scenarios
17:51:12 boris-42: I did not say it is feature-to-feature exactly the same.
17:51:36 boris-42: But the similarity is why the subject of tempest came up I think.
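The jsonschema-with-YAML point harlowja makes above is easy to demonstrate: jsonschema validates the parsed data structure, so the on-disk format can be YAML and keep its comments. The config keys shown here are just an example, not Rally's final format.

    # jsonschema validates the loaded dict, regardless of whether the source
    # text was JSON or YAML; YAML keeps the comments people asked for above.
    import jsonschema
    import yaml

    schema = {
        "type": "object",
        "properties": {
            "times": {"type": "integer", "minimum": 1},
            "concurrent": {"type": "integer", "minimum": 1},
        },
        "required": ["times", "concurrent"],
    }

    config_text = """
    # comments are allowed in YAML, which is the whole attraction
    times: 50        # total number of scenario runs
    concurrent: 10   # how many run in parallel
    """

    jsonschema.validate(yaml.safe_load(config_text), schema)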
17:51:53 dkranz probably at this moment it is quite similar
17:52:06 dkranz but the use cases are different
17:52:16 boris-42: And the question was whether one framework could support both stress and performance tests
17:52:45 boris-42: I am not actually claiming to know the answer to that question
17:52:55 both tempest and rally formats are way too simplified
17:52:56 dkranz actually I would like to clarify the situation
17:53:08 dkranz the goal of tempest is as follows
17:53:29 kylichuku: Yes, and they would both evolve
17:53:30 even for the same scenario different users in different tenants will have slightly different workflows
17:53:36 a very fast stress test of the main OpenStack functionality
17:53:47 to be able to work inside the gate
17:53:59 and ensure that our patches don't hurt the performance of the cloud
17:54:25 boris-42: Sure
17:54:31 So if you are concentrating on a tool that should work fast and in the gate, that is one set of scenarios and a special engine
17:54:37 And Rally has another goal
17:54:45 boris-42, would you see any possibility to concatenate simple tests into a scenario (using the config file) and define threads and loops as a global parameter?
17:55:03 something that I would be happy to see is a separation of workflow definition and load specification
17:55:37 giulivo threads and loops shouldn't be in one place, at least in that format
17:55:44 giulivo let me show some sample
17:56:20 boris-42, yeah I get that but I was thinking about concatenating simple tests into a more complex scenario from just the config file
17:56:30 so that one could "make up" a scenario from a set of simple tests
17:56:42 giulivo http://pastebin.com/j9BREqE9
17:56:51 in that case I'd just expect the tests to be executed in a sequence and threads and loops would be global
17:57:04 giulivo this is another thing
17:57:24 for example, the workflow could be "Provision VM, keep it alive for 15 minutes, generate some work on it, snapshot, shutdown" with a separate specification of VM params (# of vCPUs, RAM and attached storage) multiplied by usage pattern (N tenants, M users per tenant, # of runs per hour)
17:57:24 giulivo we are going to make it possible to run multiple scenarios at the same time
17:57:55 all these 3 things should be described in a declarative fashion with the underlying framework knowing how to combine these rules into a workload profile
17:58:35 kylichuku, that would be great yes, I think that explains better what I had in mind yes
17:59:08 kylichuku hmm and where do you see the problem now?)
17:59:42 kylichuku you are specifying all these parameters, like image, flavor and how much time to run, and concurrency?)
17:59:57 boris-42 both tempest and rally formats describe only the 3rd set of params
17:59:57 dolphm: o/
18:00:22 boris-42 while the workflow is embedded into the test (written in python) and the VM specification is outside of the equation
18:00:29 o/ : )
18:00:46 o/
18:00:51 joesavak \o
18:00:56 we should end the meeting=)
18:01:01 maybe :)
18:01:05 boris-42, not to bother but yeah mainly I think it'd be nice to move the workflow out of the test and more into the config
18:01:14 o/
18:01:21 o/
18:01:22 giulivo okay let's move to openstack-rally
18:01:27 \o
18:01:41 topol, they let you out of your meetings?
18:01:41 o/
18:01:47 #endmeeting