17:02:50 #startmeeting Rally
17:02:51 Meeting started Tue Nov 26 17:02:50 2013 UTC and is due to finish in 60 minutes. The chair is boris-42. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:02:52 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:02:54 The meeting name has been set to 'rally'
17:04:00 Alexei_987 ping
17:04:05 pong
17:04:08 harlowja ping
17:04:28 #topic profiling status
17:04:51 Alexei_987 could you share our document and the ideas around the current status and problems?
17:04:57 dhellmann-afk ping
17:05:19 which one? :) https://etherpad.openstack.org/p/tomograph-adjustments ??
17:05:37 Alexei_987 yep this one
17:06:15 boden ping
17:07:48 Alexei_987 so?
17:08:35 so today I was working on planning how we can use ceilometer as a data collector/storage for our profiling data
17:09:00 Alexei_987 did you find a way to use it?
17:09:22 well I'm still working on it but I already have a rough idea of how it should be done
17:09:41 Alexei_987 do we need to make some changes in ceilometer?
17:09:47 no :)
17:09:49 Alexei_987 or can we use it out of the box?
17:09:57 we'll use the openstack/common/ notification system to send data via RPC
17:09:59 Alexei_987 oh, that's nice
17:10:09 Alexei_987 to ceilometer?
17:10:13 yes
17:10:28 we'll send raw data to it
17:10:52 and our visualization system will fetch data from ceilometer and handle the rest (display the correct hierarchy)
17:11:19 Alexei_987 are we going to face a problem with too much data in ceilometer?
17:11:35 theoretically we may face it
17:11:44 but it's ceilometer's job to handle it
17:11:54 since it's supposed to be an HA data collector
17:12:04 jd__ ping
17:12:23 so if we actually face such a problem we'll have to work on ceilometer to improve its performance (which is ok for me)
17:12:29 Alexei_987 do you have some data about how much data we will send to ceilometer?
17:12:44 no, since we don't have any working prototype for now :)
17:12:51 Alexei_987 ouch=)
17:13:01 but I'm pretty sure that we won't have any problems with that for a long time
17:13:19 it already handles a lot of data so our load won't be too much for it
17:13:48 Alexei_987 okay
17:14:00 Alexei_987 so you are going to write a new backend for tomograph?
17:14:13 Alexei_987 how much time will it take? 1-2 days?
17:14:20 well it won't be a backend for tomograph
17:14:31 it will be a new library + a ceilometer backend
17:14:40 Alexei_987 get rid of tomograph?
17:14:44 since tomograph won't be compatible with the new data structure
17:14:54 you can consider it as tomograph 2.0
17:15:27 Alexei_987 so we are going to drop all the current backends in tomograph?
17:15:38 Alexei_987 refactor it, and add our ceilometer one?
17:15:41 yes, since we won't use them anyway
17:15:52 harlowja ^
17:15:53 true, true
17:15:58 Alexei_987 okay
17:16:05 Alexei_987 I hope Tim won't be against it
17:16:20 Alexei_987 so how much time are you going to spend implementing all this stuff
17:16:42 Alexei_987 ?
17:16:50 well I've underestimated it a little bit in the morning :)
17:17:06 but I guess I'll have a working profiler by the end of the week
17:17:21 Alexei_987 okay
17:17:27 so 2-3 days for the new profiler + ceilometer backend
17:17:50 Okay, let's then move on to the other topics
17:17:51 have to do a lot of digging in ceilometer code
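
A minimal sketch of what the notification-based profiler discussed above could look like. No prototype existed at the time of the meeting, so everything here is an illustrative assumption: the event type, publisher_id, and payload fields are made up, and the sketch uses today's oslo.messaging, the descendant of the openstack/common notifier mentioned in the log.

# Sketch only: emit profiling trace points as notifications that a
# Ceilometer collector could consume as raw samples. Event type,
# publisher id and payload layout are assumptions, not the real design.
import time
import uuid

from oslo_config import cfg
import oslo_messaging

transport = oslo_messaging.get_notification_transport(cfg.CONF)
notifier = oslo_messaging.Notifier(transport,
                                   driver='messaging',
                                   publisher_id='rally.profiler',
                                   topics=['notifications'])


def trace_point(name, parent_id=None):
    """Send one raw trace event; the visualization layer would later
    fetch these from ceilometer and rebuild the call hierarchy."""
    trace_id = str(uuid.uuid4())
    payload = {
        'trace_id': trace_id,
        'parent_id': parent_id,   # lets the UI reconstruct nesting
        'name': name,
        'timestamp': time.time(),
    }
    notifier.info({}, 'profiler.trace', payload)
    return trace_id

Since raw events are just sent and stored, all hierarchy reconstruction is deferred to the visualization side, which matches the "send raw data, handle the rest later" plan in the discussion.
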
17:18:14 #topic rally benchmark egine changes
17:18:34 typo ^
17:18:41 i can't fix it=)
17:18:44 lol
17:18:59 There are 2 main areas here
17:19:11 boden could you share details about the generic cleanup?
17:20:14 boris-42 yes... the actual impl is complete, and I'm currently working on the UTs for it. In summary, the cleanup runs just prior to deleting the users/projects for the benchmark and will clean up servers, images, networks, volumes, etc.
17:20:40 boden when are you going to finish the work around UTs for this?
17:21:14 boris-42 in reality not until next week most likely... it's a holiday here this week and today is my last day for the week
17:21:46 boden ok no problem, so we could expect some patches next Monday?
17:22:14 Monday or Tuesday most likely.. I may have some time over vacation to work on it, but I'm not sure
17:22:28 boden btw could you just push your patch for review (without tests) just so we can review it?
17:23:00 boris-42 sure... I can do that before vacation --- need to run tox and clean that up 1st
17:23:09 boden thank you
17:23:44 Okay, next up is msdubov. He is working on changing the benchmark config, so we will be able to run tasks not only "continuously" but also "periodically". Could you explain this change and our current status?
17:24:19 boris-42 Hi
17:25:04 boris-42 So currently Rally executes benchmarks a given number of times, according to the user settings in the benchmark config
17:25:19 Last week we changed the format of the config, so it has become more flexible
17:25:27 and also more transparent to the end-user
17:25:45 Further work is concentrated on 2 major features:
17:26:28 1) Implementing benchmark running for a specified amount of time. E.g. we should be able to ask Rally to load the cloud with the benchmark scenario for booting-deleting servers for 10 minutes
17:27:03 2) Implementing periodic benchmark runs: this should enable the end-user to execute any benchmark scenario at given intervals
17:27:34 E.g. launch the boot-delete server scenario taking a 1-minute pause after each run.
17:28:14 Finally we plan to implement running multiple benchmark scenarios in parallel, so that we can consider one scenario as the main one, while the other scenario serves as "noise"
17:28:34 The mentioned changes are essential for implementing this stuff
17:29:07 msdubov okay, so when are you going to finish all the stuff around periodic test running? or is it already finished?
17:29:52 boris-42 So actually the patches for 1) and 2) seem to be ready and are pending review
17:30:06 boris-42 as soon as they get merged I'll concentrate on parallel runs
17:31:23 msdubov okay thanks
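
To make the run modes just described concrete, here is an illustrative task-config snippet. The scenario name and the config keys ("execution", "times", "duration", "period", "active_users") are assumptions sketched from the discussion, not necessarily the exact format that landed in Rally.

# Hypothetical benchmark task config showing the three run modes:
task = {
    "NovaServers.boot_and_delete_server": [
        # 1) classic continuous run: repeat the scenario N times
        {"args": {"flavor_id": 1},
         "execution": "continuous",
         "config": {"times": 10, "active_users": 2}},

        # 2) time-bound run: keep the cloud loaded for 10 minutes
        {"args": {"flavor_id": 1},
         "execution": "continuous",
         "config": {"duration": 600, "active_users": 2}},

        # 3) periodic run: one iteration, then a 60-second pause
        {"args": {"flavor_id": 1},
         "execution": "periodic",
         "config": {"times": 10, "period": 60}},
    ]
}

The planned parallel-run feature would then amount to running two such entries at once, one as the main scenario and one as "noise".
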
17:31:40 #topic benchmark engines & server providers
17:32:43 benchmark engines?
17:33:00 #topic deployers & server providers
17:33:04 Sorry typo=)
17:33:23 eyerediskin could you share the status of deployers & server providers?
17:34:12 there are 3 patches on review
17:34:54 and one more coming soon (image downloading for the virsh provider)
17:35:41 boris-42: this one was done a long time ago https://review.openstack.org/#/c/48811/
17:36:25 eyerediskin so I should review it?)
17:36:26 lxc engine and multihost provider: https://review.openstack.org/#/c/57240/ https://review.openstack.org/#/c/56222/
17:36:45 eyerediskin did you test the LXC providers?)
17:36:51 eyerediskin I mean in real life?
17:37:11 boris-42: all 3 were tested many times with different configs
17:37:35 and some bugs were fixed since the first patchset =)
17:40:34 eyerediskin okay I will review them=) msdubov Alexei_987 you should also review those patches ^
17:40:56 boris-42: I'm reviewing stuff when I have free time :)
17:41:20 Alexei_987 if I reviewed only in my free time, I would never make any reviews=)
17:41:37 boris-42 Okay will do that tomorrow
17:42:08 #topic split deploy & benchmark workflow
17:42:12 boris-42: ok :) I'll spend at least 2 hours each day on reviews :)
17:42:27 Alexei_987 that is too much, I think 1 hour is enough=)
17:42:32 boris-42: multiply all my estimates by the pow of 3.14
17:42:37 lol
17:43:11 Okay, we have a critical arch bug: we were not able to use the deployment system of Rally for benchmarking
17:44:04 huh?
17:44:13 why not?
17:44:28 because we have to specify in task.conf information about image_uuid and flavor_id that we are not able to get before we make the deployment
17:44:45 and task.conf is specified before the deployment process is started =)
17:44:55 hm.. :)
17:45:04 we should have this stuff predefined
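
To illustrate the chicken-and-egg problem just described: once deploy and benchmark become separate steps, image_uuid and flavor_id can be resolved against the live cloud instead of being predefined in task.conf. This is a rough sketch against the era's novaclient v1_1 interface; the function, image name, and credential keys are all made up for the example.

# Sketch: resolve image_uuid/flavor_id from a cloud deployed in a
# previous step, rather than hardcoding UUIDs in task.conf before
# the deployment exists. Names here are illustrative assumptions.
from novaclient.v1_1 import client as nova_client


def resolve_task_args(auth):
    nova = nova_client.Client(auth['username'], auth['password'],
                              auth['tenant'], auth['auth_url'])
    image = nova.images.find(name='cirros-0.3.1')   # assumed image name
    flavor = nova.flavors.find(name='m1.tiny')
    return {'image_uuid': image.id, 'flavor_id': flavor.id}
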
17:45:25 Alexei_987 actually there are a lot of other issues
17:45:44 e.g. we were not able to deploy openstack and run multiple benchmarks against it
17:46:01 So we chose the way of splitting the deploy/benchmark parts
17:46:07 (facepalm)
17:46:09 And now Rally it 2 click
17:46:11 is*
17:46:22 yeah, but it means that we don't need deploy at all
17:46:31 Alexei_987 hm why?
17:46:32 we have tripleO + fuel + devstack
17:46:40 + any other manual deploy
17:46:49 Alexei_987 it is not manual at all
17:46:56 Alexei_987 it is in 1 click
17:47:05 Alexei_987 like before
17:47:18 no.. I mean that we can just agree that we already have OpenStack running
17:47:32 and delegate the deploy part to something else
17:47:38 Alexei_987 nope
17:47:58 Alexei_987 try to deploy OpenStack with DevStack in LXC containers on Amazon VMs
17:48:15 why should I?
17:48:26 I mean rally's purpose is profiling, not deploy
17:48:35 Alexei_987 ehmmmm
17:48:39 Alexei_987 no
17:48:42 no?
17:48:45 Alexei_987 no
17:48:48 ok
17:49:02 Alexei_987 it is the system that makes benchmarking of OpenStack at scale simple
17:49:12 ok so profiling + benchmark
17:49:14 ?
17:49:15 no
17:49:34 deploy + benchmark + results processing + profiling
17:49:44 and there is now also another use case
17:49:52 about generating real workloads
17:49:58 IMHO too many responsibilities
17:50:05 Nope
17:50:16 ok forget it
17:50:21 let's get back to the topic
17:50:28 If I am not able to get a 1k-servers deployment in one click in a venv
17:51:08 I don't need the other parts
17:51:20 I don't need benchmarks and profiling at all
17:51:45 because the deploy process is too complicated even if you are using Fuel/TripleO/Anvil/Devstack
17:51:59 boris-42: exactly
17:52:13 So our goal is not to reinvent and make a new deployer for OpenStack
17:52:16 boris-42: and you want us (2-3 devs) to make something that is more powerful
17:52:17 just to use the existing ones
17:53:01 and simplify and unify work with them (so as to get a good deployment without any knowledge about how the hell to deploy openstack)
17:53:14 boris-42 wants to be able to configure and ignite a deployment tool from rally
17:53:32 any deployment tool, eventually
17:53:37 right?
17:53:44 ogelbukh yep
17:54:00 ogelbukh by specifying all configurations that could be specified
17:54:06 ogelbukh so as to make it simple to use
17:54:12 you need a good configuration model then )
17:54:12 ogelbukh without any knowledge
17:54:27 ogelbukh I think that the current stuff is pretty good
17:54:41 ogelbukh we just need to split the deploy/benchmark workflows
17:54:48 that's for sure
17:54:55 ogelbukh and I think that during this week we will finish work on that
17:55:12 ogelbukh there are just a few patches that should be merged
17:55:24 looking forward to it
17:55:30 ogelbukh https://review.openstack.org/#/q/status:open+project:stackforge/rally+branch:master+topic:bp/independent-deploy,n,z
17:55:36 boris-42: 5 minutes left :)
17:55:37 we'll need this stuff really soon )
17:56:16 #topic Okay, last topic: Rally as a Service
17:56:35 We should determine the API that Rally services should provide
17:57:17 so we started this etherpad https://etherpad.openstack.org/p/rally-service-api
17:57:36 nice )
17:57:40 I will also send an email to the mailing list about this=)
17:57:57 so everybody who would like to discuss this will be able to take part
17:58:10 The service is actually very important
17:58:22 because it is the base step toward making a Web UI for Rally
17:58:32 that will be our next major goal
17:58:42 #topic free discussion
17:58:43 boris-42: and many other things, I dare to say )
17:58:48 =))
17:59:07 I think that there will be no free discussion today=)
17:59:14 because we don't have enough time=)
17:59:20 btw, found this tool for api design: apiary.io
17:59:30 looks neat
17:59:37 #endmeeting