17:07:18 #startmeeting rally
17:07:19 Meeting started Tue Apr 1 17:07:18 2014 UTC and is due to finish in 60 minutes. The chair is boris-42. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:07:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:07:22 The meeting name has been set to 'rally'
17:07:25 boris-42: Error: Can't start another meeting, one is in progress. Use #endmeeting first.
17:07:36 hughsaunders marcoemorais ping
17:08:30 hey
17:09:05 boris-42: hey
17:10:18 marcoemorais hughsaunders let's wait a bit for the others =)
17:11:37 ok :-) i thought i'd missed the meeting, but summertime has moved it an hour later :-)
17:14:06 hughsaunders hehe =)
17:14:14 eyerediskin ping
17:14:19 * mwagner_zzz in
17:15:24 boris-42: pong
17:17:22 mwagner_ marcoemorais could you share your emails?
17:17:34 mwagner_ marcoemorais I would like to share a google doc
17:18:41 boris-42: mmorais@yahoo-inc.com
17:20:28 #topic Rally future road map
17:20:47 mwagner_ marcoemorais eyerediskin mwagner_zzz hughsaunders okay let's start
17:20:56 Today we actually have only one topic, but it's quite a big one
17:21:00 https://docs.google.com/document/d/1cAgyueiQ__tZ_MG5oOxe4q49tyPQcm-Y1469lmeERq8/edit?usp=sharing
17:21:02 Rally future road map
17:21:54 So the high-level things that I would like to cover are:
17:22:29 1) Better cover the requirements of benchmarking other projects (e.g. Marconi and MagnetoDB) that would like to generate 10k-100k requests per second
17:22:40 ^ It will require a lot of work on runners
17:23:36 2) Better cover the requirements of neutron, where we actually need to build a specific environment before benchmarking (benchmark-context)
17:24:19 3) Better cover the requirements of PaaS (murano, solum, sahara), where we are interested not only in measuring API stuff
17:24:24 but in how it actually works at all
17:24:53 Start working on operators' stuff
17:25:14 HealthChecks, better verification, REST API, better reports, historical data and so on
17:25:26 Finish profiling =)
17:25:46 And finally have gates that test everything =)
17:26:16 so that's all =)
17:26:28 boris-42: done
17:26:35 haha
17:26:36 +2
17:26:44 PROFIT!
17:27:33 boris-42, it would be good to be able to quantify the performance of things like #2, in other words, measuring how long it takes to build out the neutron network
17:28:16 mwagner_ so actually pinguinRaider will try to start working on measuring time in context
17:28:31 mwagner_ during GSoC 2014 (it's part of his task)
17:28:59 making stuff run fast in a guest is easy, getting the env set up in a performant manner is hard
17:29:47 mwagner_ yep yep
17:30:02 mwagner_ this will be hard work on our context classes
17:30:16 mwagner_ like we did with the "users" context to create users & tenants in parallel
17:30:25 mwagner_ but it seems like it could be improved more
17:31:25 mwagner_ harlowja hughsaunders marcoemorais so we should start working on that big document
17:31:40 https://docs.google.com/document/d/1cAgyueiQ__tZ_MG5oOxe4q49tyPQcm-Y1469lmeERq8/edit?usp=sharing
17:32:20 boris-42 what's the plan for selectively applying context? add a decorator for required contexts, then supply args in config?
17:32:30 harlowja yep
17:32:33 hughsaunders yep
17:32:44 hughsaunders actually it will be a bit more magic =)
17:33:01 hughsaunders e.g. if there is nothing in the "context" config it will put {}
17:33:07 hughsaunders and try to validate {}
17:33:19 hughsaunders if that passes, like in the case of key pair & sec group
17:33:34 hughsaunders then you don't need to specify them in the conf every time
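
A minimal sketch of the "decorator plus validate-the-empty-config" idea described above (all names here are hypothetical, not Rally's actual context API):

    # Hypothetical sketch; Rally's real context machinery may differ.

    CONTEXTS = {}   # registry: context name -> context class


    def context(name):
        """Register a benchmark context class under a name."""
        def decorator(cls):
            CONTEXTS[name] = cls
            return cls
        return decorator


    def requires_context(*names):
        """Declare which contexts a scenario needs.

        For each required context, take its section from the task's
        "context" config if present, otherwise fall back to {} and
        validate that, so contexts with sane defaults never have to
        be spelled out in the task file.
        """
        def decorator(scenario):
            def runner(task_config, *args, **kwargs):
                ctx_cfg = task_config.get("context", {})
                for name in names:
                    cfg = ctx_cfg.get(name, {})    # nothing given -> {}
                    CONTEXTS[name].validate(cfg)   # {} must pass validation
                return scenario(task_config, *args, **kwargs)
            return runner
        return decorator

With something like this, a scenario marked requires_context("keypair", "secgroup") runs with no explicit context section at all, while a context that genuinely needs parameters fails validation on {} and forces the user to supply them.
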
17:37:16 so does anybody want to discuss anything about the roadmap?)
17:37:22 boris-42: what about the distributed runner? are we going to build on top of what python offers in multiprocessing
17:37:39 boris-42: or are we going to use a task queue like celery or ...?
17:37:55 marcoemorais query
17:38:05 marcoemorais by distributed I mean a lot of agents
17:38:22 marcoemorais that can produce load from different hosts
17:38:41 marcoemorais cause it's really hard to produce a big enough load from one node
17:39:09 boris-42: yes, exactly
17:39:28 marcoemorais so we should think about how to make that possible while keeping a simple way to specify load (like we have now)
17:39:52 marcoemorais probably just multiplying the current scenario runners across different hosts without any change =)
17:40:09 marcoemorais but there is a problem with collecting a big amount of data and storing it
17:40:28 marcoemorais as well as the part about deploying runners
17:40:38 marcoemorais I think eyerediskin may help us with this part ^
17:40:43 boris-42: I was wondering whether the rally-as-a-service part would play a role here
17:41:03 marcoemorais rally-as-a-service is a different thing =)
17:41:20 boris-42: if we have a rally service api, then we can use an rpc interface to distribute the load generation
17:41:45 marcoemorais I don't think that is a good idea
17:42:02 marcoemorais cause I would like to keep almost the same functionality via CLI and aaS
17:42:16 marcoemorais so we should make the runners more separated from the project
17:42:19 boris-42: another way is the task queue, but for that we need to have a rally daemon or something on each client to consume the tasks
17:42:41 marcoemorais yep ^ that I like
17:43:01 marcoemorais we can reuse server providers
17:43:14 marcoemorais and instead of deploying openstack, run rally agents
17:43:35 marcoemorais that have a simple rpc/http api to accept context and produce some load
17:44:03 marcoemorais i wouldn't touch celery :-P
17:44:08 use taskflow ;)
17:44:23 harlowja yep, seems like a good place for taskflow
17:45:05 def
17:45:35 boris-42 harlowja ok we will use taskflow
17:45:45 marcoemorais so are you interested in this topic?)
17:46:04 harlowja using taskflow ^ better result collecting & better result storing
17:46:04 boris-42: yes I would like to work on this, we can make use of it right away
17:46:06 marcoemorais ^
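
As a rough illustration of the direction agreed above, fanning per-agent load generation out as a taskflow flow might look like this (GenerateLoad, build_load_flow, and the result handling are a hypothetical sketch, not the actual implementation; the taskflow calls themselves — task.Task, unordered_flow.Flow, engines.run — are the library's public API):

    import taskflow.engines
    from taskflow import task
    from taskflow.patterns import unordered_flow


    class GenerateLoad(task.Task):
        """One agent's slice of the total load (hypothetical)."""

        def execute(self, context, scenario_config):
            # A real agent would run the scenario runner locally (or
            # call out to a remote rally agent over its rpc/http api)
            # and return raw timing results for later aggregation.
            results = []
            # ... run scenario_config["times"] iterations, collect timings ...
            return results


    def build_load_flow(num_agents):
        # Unordered flow: the agent tasks are independent, so an
        # engine capable of parallelism could run them concurrently.
        flow = unordered_flow.Flow("distributed-load")
        for i in range(num_agents):
            flow.add(GenerateLoad(name="agent-%d" % i,
                                  provides="results-%d" % i))
        return flow


    # 'store' injects the shared inputs that every agent task consumes;
    # the returned dict maps each "results-N" name to that agent's data.
    outputs = taskflow.engines.run(
        build_load_flow(num_agents=4),
        store={"context": {}, "scenario_config": {"times": 100}})

Capturing each agent's output as a named flow result also lines up with the "better result collecting & storing" point above, since everything lands in one place for aggregation.
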
17:46:08 boris-42 another suggestion: rate-limiting load creation
17:46:34 without some kind of rate limiting it's gonna be hard to control debugging, verifying...
17:46:35 & while you are still at task queues, remember marconi ;)
17:46:43 macaroni
17:46:44 lol
17:46:58 maraconi
17:47:06 oh
17:47:11 macaroni
17:47:20 marconi
17:47:25 http://upload.wikimedia.org/wikipedia/commons/e/ea/Macaroni_closeup.jpg
17:47:42 marcoemorais(oni)
17:48:03 harlowja marcoroni?)
17:48:07 lol
17:48:34 also synchronization would be key, the ability to make all the agents fire at once
17:48:35 harlowja boris-42 following up on harlowja's idea, what do you think about being able to express the load to put on the cloud in terms of RPS to keystone
17:49:06 marcoemorais +1
17:49:13 marcoemorais instead of period?)
17:49:19 so some scheduling, at 13:00 start 50 agents doing X
17:49:19 marcoemorais hopefully not just to keystone right?
17:49:56 mwagner_ i think it's more than just starting 50 agents doing X, it's also about controlling how much traffic they produce
17:49:57 harlowja: yes, not just keystone — I mistakenly refer to keystone as the uber api
17:50:02 k
17:50:09 marcoemorais hmm
17:50:16 marcoemorais why not just use the current runners
17:50:22 marcoemorais and the first request is init
17:50:28 marcoemorais to send context objects
17:50:33 marcoemorais second is multicast
17:50:38 marcoemorais fire!
17:50:43 marcoemorais burn openstack =)
17:51:00 when pushing the boundaries of the system it will be better to have some rate control though, right :-P
17:51:01 less burn, lol
17:51:08 harlowja, agreed on the amount of activity, but there are cases when you want them synchronized
17:51:23 mwagner_ ah, sure
17:51:35 mwagner_ I am not sure that we need such precise synchronization
17:51:45 the meeting is going to end, but i have a question for everyone: what do you think about not using google docs, and using the wiki instead?
17:51:50 mwagner_: if you require synchronization, why wouldn't you just code that as part of your scenario?
17:51:51 mwagner_ especially when we run a benchmark for a few hours
17:52:41 mwagner_ it doesn't matter if some runner doesn't start in exactly the same second
17:52:54 mwagner_ or am I not right?)
17:52:59 eyerediskin, Wiki doesn't provide commenting facilities...
17:53:05 eyerediskin, That's of huge importance
17:53:19 eyerediskin, Why do you dislike gdocs?
17:53:25 mwagner_: in other words, if you want to test the load of operations a1 and a2 in parallel, then you write a scenario which forks and synchronizes to call a1 and a2 in parallel?
17:53:39 eyerediskin I dislike that idea =0
17:53:49 boris-42, depends on the tests, if there are times when you want to send X requests at the same time
17:54:27 mwagner_ so let's write it in the google doc
17:54:28 marcoemorais, boris-42 also thinking of the ability to schedule, every Sat at 15:00 kick off this set of tests
17:54:33 mwagner_ ideas and fantasies =)
17:54:39 marcoemorais, Btw I think we also shouldn't require too much complicated coding in scenarios
17:54:42 mwagner_ it's more operator stuff
17:54:50 marcoemorais, They should be as simple to write as possible
17:54:50 mwagner_ it shouldn't be inside the benchmark engine
17:55:11 mwagner_ it should be on top of Rally
17:55:22 mwagner_ it's the health/performance check part
17:55:27 boris-42, my fantasies should *not* be in a google doc ;)
17:55:33 haha
17:55:40 mwagner_ ^_^
17:55:45 every week = rally api client + cron ?
17:56:10 hughsaunders it will be inside Rally aaS
17:56:23 hughsaunders support for periodic tasks
17:56:58 btw https://review.openstack.org/84394
17:57:02 hughsaunders e.g. an operator creates a "task config" & specifies when to run it
17:57:05 so we have a rally-install job
17:57:18 eyerediskin ^ nice nice!
17:57:42 mwagner_ so okay, write down your ideas lol =)
17:57:47 marcoemorais as well
17:58:11 mwagner_ marcoemorais so we will discuss how to cover everybody's use cases =)
17:58:20 eyerediskin: cool
17:58:46 okay guys, further discussion in the Rally chat room
17:58:52 =)
17:59:03 #endmeeting