17:03:25 #startmeeting rally
17:03:26 Meeting started Tue Apr 15 17:03:25 2014 UTC and is due to finish in 60 minutes. The chair is boris-42. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:03:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:03:29 The meeting name has been set to 'rally'
17:03:48 hughsaunders msdubov marcoemorais olkonami kun_huang meeting time
17:03:53 o/
17:03:59 boris-42, hi
17:04:01 hi
17:04:33 boris-42: hi
17:08:32 hi
17:08:56 hey everybody
17:08:59 okay let's start
17:09:13 marcoemorais first of all grats
17:09:20 marcoemorais with splitting scenarios =)
17:09:29 boris-42: cool, thx for the reviews
17:09:34 #topic tempest integration
17:09:53 olkonami and Andrey worked hard
17:09:57 to make this possible
17:09:58 =)
17:10:09 thanks to both :)
17:10:30 now we should review everything)
17:10:40 the first patch has just gone in
17:10:52 hughsaunders oh you already reviewed =)
17:10:58 hughsaunders btw does it pass all tests?
17:11:01 yeah
17:11:37 hughsaunders nice, going to test
17:11:41 olkonami well done! =)
17:12:02 so the other 2 patches add support for benchmarking with any of the tests from tempest
17:12:40 marcoemorais msdubov you could also help with reviews
17:12:45 to speed up the process
17:12:48 boris-42: ack
17:12:51 boris-42 Sure, also started doing that
17:13:07 boris-42 I had one concern: whether the Tempest context should be a hidden one
17:13:14 msdubov oh yes
17:13:21 msdubov agree
17:14:00 so after we merge these 2 patches we will be able to say that we finished tempest configuration
17:14:09 to the mailing list, woohoo =)
17:14:31 olkonami do you have anything to say?
17:15:03 just thanks for reviewing and testing those patches =)
17:15:18 olkonami okay nice =)
17:15:30 olkonami I think I will find you a more interesting task =)
17:15:37 olkonami than integration with tempest =)
17:15:47 okay let's move to the next topic
17:16:03 #topic Rally as a Service
17:16:18 today we started a discussion about what we should do
17:16:36 #link https://blueprints.launchpad.net/rally/+spec/api-base
17:16:53 https://docs.google.com/a/pavlovic.ru/document/d/1lzo-UTI0Rg767WEzl42XdUYHBW7ZlpkARieJ_5E8Z_g/edit#heading=h.sjbn72b73o3c so we started this document
17:17:04 and let me copy-paste
17:17:07 the new arch diagram
17:18:01 one sec
17:18:42 okay, added the diagram that we discussed today
17:18:44 to that document
17:19:01 so the key things that were discussed are these
17:19:13 1: we will have 4 different projects
17:19:23 1. rally (with the web ui together)
17:19:47 2. rally horizon plugin (temporary project until it is merged into horizon)
17:19:58 3. rally python client
17:20:16 4. rally-web-lib (common code shared by the horizon plugin and the web ui)
17:21:03 2: we should move all logic from our current CLI to the Rally Manager orchestrator (the current orchestrator API)
17:21:31 3: the Orchestrator API should be OOP (and probably split)
17:21:56 and that's actually all that we discussed =)
17:22:14 if somebody has any ideas or would like to take part in the discussion you are welcome =)
17:22:59 hughsaunders msdubov marcoemorais olkonami aswadrangnekar ^
17:23:01 any questions?)
17:23:41 boris-42 perhaps just add that one of the main ideas was to move all DB calls from the CLI to the Orchestrator
17:23:42 boris-42: nothing as of now
17:24:32 msdubov yep, we should add a lot of info to that document
17:24:41 boris-42 agree
17:24:47 I think 1.5-2 weeks for discussion
17:24:53 and then we will start implementation
17:25:40 boris-42: inside of rally-as-a-service, what is this manager rpc api?
17:26:14 marcoemorais so it's the thing that listens on "rpc" and performs all operations by calling the orchestrator
17:26:21 marcoemorais let me explain with a sample
17:26:32 say you would like to run "task start"
17:26:41 it can run for quite a long period
17:27:11 so the API (controller) makes an async call to run the task
17:27:15 to the manager
17:27:23 and the manager works for some amount of time
17:28:16 so the controllers are going to be quite dumb
17:28:19 boris-42: ok yep, I get it
17:28:23 just accept the request
17:28:33 and call the sync/async manager
17:28:37 and return the result
17:28:54 for such sync/async stuff we should have something =) and that something is the manager =)
17:29:19 boris-42: i guess it is the same logic as used in nova / other projects?
17:29:31 aswadrangnekar actually yes
17:29:44 aswadrangnekar at least quite similar
17:30:31 hmmm, boris-42 sounds similar to https://blueprints.launchpad.net/taskflow/+spec/generic-flow-conductor
17:31:04 harlowja_ why not just use the oslo.messaging crap?)
17:31:14 2.7
17:31:17 no 3.3
17:31:19 3.4
17:31:26 but maybe that doesn't matter for u
17:31:32 harlowja_ but in rally there is no 3.3 and 3.4
17:31:36 :-P
17:31:39 harlowja_ because glance client is broken
17:31:40 =)
17:31:43 lol
17:31:48 and we have it in requirements =(
17:32:10 k, well oslo.messaging doesn't provide u a conductor/orchestrator though
17:32:25 *seems like that's what u are talking about here
17:32:31 hmm
17:32:41 okay this is a long discussion, probably not for meetings?)
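The controller/manager split described above — a "dumb" controller that just accepts the request, hands it to an async manager, and returns — could be sketched roughly as below. All names (TaskManager, controller_start, etc.) are illustrative stand-ins, not Rally's actual API, and a plain thread stands in for the RPC dispatch:

```python
import threading
import uuid

class TaskManager:
    """Stand-in for the manager that runs long benchmark tasks async."""

    def __init__(self):
        self._results = {}
        self._lock = threading.Lock()

    def start_task(self, payload):
        # Hand off the long-running work to a background worker and
        # return immediately with a task id the caller can poll.
        task_id = str(uuid.uuid4())
        worker = threading.Thread(
            target=self._run, args=(task_id, payload), daemon=True)
        worker.start()
        return task_id

    def _run(self, task_id, payload):
        # Stand-in for actually executing the benchmark scenario.
        with self._lock:
            self._results[task_id] = {"status": "finished",
                                      "payload": payload}

    def get_result(self, task_id):
        with self._lock:
            return self._results.get(task_id, {"status": "running"})


def controller_start(manager, payload):
    """A 'dumb' controller: accept the request, delegate, return an id."""
    return manager.start_task(payload)
```

The key property is that the controller never blocks on the task itself; only the manager knows how long the work takes.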
17:32:45 k
17:32:50 we just started that document
17:32:55 ideas are welcome imho
17:33:08 boris-42 aswadrangnekar harlowja_: in any case the request to benchmark will return a polling url, which the client will use to get the result of the benchmark http://docs.openstack.org/api/openstack-compute/2/content/ChangesSince.html
17:33:42 marcoemorais bueee
17:33:48 marcoemorais polling urls are so 90s
17:34:07 marcoemorais at least in the rally web ui we should try to build it on web sockets
17:35:31 marcoemorais so it seems like there will be a huge discussion about this stuff
17:35:49 let's move on to other interesting stuff
17:35:57 boris-42 is from the future
17:36:13 hughsaunders lol =)
17:36:22 hughsaunders, cyborg?
17:36:30 possibly
17:36:41 huats, hopefully not a Terminator
17:36:43 t-800 lol
17:38:03 don't write bad code http://4.bp.blogspot.com/_KlPx_bSlcuc/TVCpzzz17zI/AAAAAAAAAVk/heRJXE8UCRw/s1600/t-1000.jpg !!
17:38:39 :)
17:39:05 #topic 100k load
17:39:20 marcoemorais did you start thinking about it and the current periodic runner?
17:40:05 boris-42: yes, starting to use taskflow, need to meet with harlowja_
17:40:39 marcoemorais but what about just improving the current one?
17:41:02 marcoemorais I mean I am interested in keeping pluggability
17:41:16 marcoemorais but being able to run the same plugins from different hosts
17:41:34 marcoemorais (that will also be a plugin)
17:41:50 marcoemorais e.g.
a GrandRunner that runs other runners lol
17:42:11 boris-42: distributed benchmark will require an agent running on all hosts used to generate load
17:42:19 marcoemorais yep
17:42:49 marcoemorais so I'm thinking about reusing our rally/deploy/serverproviders
17:42:59 marcoemorais for this case as well
17:43:10 we could ssh to the other nodes and only run the load generator for the duration of the benchmark
17:43:18 so you don't need a permanent agent
17:44:17 hughsaunders it will be quite hard
17:44:20 hughsaunders: kind of incomplete to say that, rally has to be deployed on the other hosts
17:44:45 hughsaunders to keep 1k open ssh connections
17:44:51 hughsaunders from one host
17:44:57 hughsaunders: also messy for tracking
17:45:17 so agents are ok I think here..
17:45:28 ok
17:45:38 communication via AMQP?
17:45:41 hughsaunders yep
17:45:51 hughsaunders there will be a lot of tasks
17:46:06 1. storing all this data
17:46:13 2. collecting it
17:46:23 3. sending context to all runners
17:46:46 as well it would be nice to have an "abort" command
17:47:05 and we should also take care of checking that all runners actually work on all nodes?)
17:47:19 hughsaunders ^ so I think agents will be not so simple =)
17:47:43 boris-42: that does sound non-trivial
17:47:44 are we targeting a specific OS for the target hosts? the solution needs to take that into account
17:47:56 boris-42 hughsaunders the agent should be able to sanity check & update the host it is running on
17:48:06 marcoemorais nope
17:48:18 marcoemorais at least it should work on centos/ubuntu
17:48:52 could we supply the agent in a venv?
17:49:37 marcoemorais I think there should be some design doc
17:49:41 marcoemorais before starting this work
17:50:06 okay, next topic
17:50:17 #topic pre-processing of input args in task
17:50:25 marcoemorais what is the status?
17:50:44 boris-42: working on your comments, will have more wip today
17:50:53 marcoemorais ok nice
17:50:55 boris-42: we agreed to skip validation of flavor for image
17:51:06 marcoemorais in the processing step only
17:51:28 marcoemorais but we should keep it in the validation step
17:51:44 boris-42: yes, so if the user uses a name instead of an id, they might have selected an invalid flavor
17:52:00 boris-42: but for now we are accepting that
17:52:07 marcoemorais nope
17:52:08 =)
17:52:21 marcoemorais as we were discussing, at this moment we should process it
17:52:28 at the validation step (inside validation)
17:52:38 and process it one more time before running
17:52:46 at least those were my thoughts =)
17:53:43 in the future we will think about how to align it with the flavor/image context
17:53:50 boris-42: so by "one more time process" you mean do a re-validate following processing?
17:53:58 marcoemorais nope
17:54:19 one sec
17:54:27 boris-42: better for me to push one more wip
17:54:32 boris-42: put the comments in the review
17:54:41 marcoemorais so
17:54:42 https://github.com/stackforge/rally/blob/master/rally/benchmark/validation.py#L185-L186
17:54:57 ^ here we should process args to get the IDs of the flavor and image
17:55:12 and we should also process the arguments before running the benchmark
17:55:29 so in 2 places we will have processing of image and flavor objects
17:56:16 marcoemorais but WIPs are welcome
17:56:18 =)
17:57:06 #topic open discussion
17:57:18 any?)
17:57:29 guys does anybody know whether there is an elegant way to suspend logs?
17:57:38 msdubov: suspend?
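The pre-processing step discussed under the previous topic — accept a flavor/image given either by name or by ID, resolve it to an ID at validation time, and resolve it again right before the benchmark runs — boils down to one lookup helper applied in two places. The sketch below is hypothetical: the resource dicts stand in for what a real novaclient/glanceclient listing would return, and `InvalidArgument` is an illustrative exception, not Rally's:

```python
class InvalidArgument(Exception):
    """Raised when a name/ID cannot be resolved to exactly one resource."""


def resolve_resource_id(resources, name_or_id):
    """Accept either an ID or a name and return the matching resource ID.

    `resources` is a list of dicts with "id" and "name" keys, standing in
    for the objects a client's list() call would return.
    """
    ids = {r["id"] for r in resources}
    if name_or_id in ids:
        return name_or_id
    matches = [r["id"] for r in resources if r["name"] == name_or_id]
    if len(matches) != 1:
        raise InvalidArgument("no unique match for %r" % (name_or_id,))
    return matches[0]
```

Calling this both inside the validator and again just before the run covers the case where the resource is deleted or renamed between validation and execution.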
17:57:38 I actually have this problem in my patch
17:57:47 hughsaunders, temporarily
17:57:54 https://review.openstack.org/#/c/86956/
17:58:00 I call Authenticate.keystone there
17:58:10 just to measure the average cloud response time
17:58:20 and I'd like to have no logs during this "helper" job
17:58:25 msdubov pass an argument
17:58:28 msdubov NOLOGS
17:58:32 msdubov NOLOGS=TRUE =)
17:58:38 boris-42 seems to be dirty =(
17:58:45 besides, it will require lots of changes
17:59:04 msdubov not sure that it requires a lot of changes
17:59:17 boris-42 Ok, will try to do it simply this way
17:59:19 thanks
17:59:27 msdubov: you could probably get the logger for the appropriate module and remove the handlers?
17:59:49 use try/finally or with to ensure they get added back again..
18:00:05 hughsaunders, How should I remove the handlers?
18:00:09 is there a special method?
18:00:24 msdubov store the original LOG instance
18:00:27 msdubov replace it
18:00:31 msdubov restore it
18:00:39 so we should end the meeting
18:00:41 boris-42 Yep that's an option
18:00:42 ok
18:00:42 #endmeeting
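hughsaunders' suggestion at the end — grab the module's logger, strip its handlers, and use try/finally or a `with` block to guarantee they come back — can be packaged as a small context manager using only the stdlib `logging` module. This is a sketch of that idea, not code from the patch under review:

```python
import contextlib
import logging

@contextlib.contextmanager
def suppress_logs(logger):
    """Temporarily remove all handlers from `logger`, restoring them after.

    Also disables propagation so records emitted inside the block do not
    reach handlers attached to parent loggers.
    """
    saved_handlers = logger.handlers[:]
    saved_propagate = logger.propagate
    logger.handlers = []
    logger.propagate = False
    try:
        yield logger
    finally:
        # try/finally guarantees the handlers come back even on error.
        logger.handlers = saved_handlers
        logger.propagate = saved_propagate
```

Usage would look like `with suppress_logs(logging.getLogger(__name__)): ...` around the "helper" Authenticate.keystone calls, so the timing measurements run without log noise.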