13:00:15 #startmeeting magnetodb
13:00:16 Meeting started Thu Oct 9 13:00:15 2014 UTC and is due to finish in 60 minutes. The chair is isviridov. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:21 The meeting name has been set to 'magnetodb'
13:00:29 Who is here?
13:00:31 o/
13:00:33 hello everybody
13:00:41 achudnovets, hello alex
13:00:53 hello!
13:01:13 ajayaa should be around too. Wait, I'll poke him
13:01:16 SpyRay, hello
13:01:31 Hi All!
13:01:38 rushiagr, great. He has several items on the agenda
13:02:03 Hello ajayaa
13:02:20 Hi isviridov.
13:02:53 Ok, I think we can start
13:03:05 Here is today's agenda #link https://wiki.openstack.org/wiki/MagnetoDB/WeeklyMeetingAgenda#Agenda
13:03:25 The AIs from the last meeting #link http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.html
13:03:37 #topic Go through action items
13:04:02 dukhlov ikhudoshyn review spec for https://blueprints.launchpad.net/magnetodb/+spec/monitoring-health-check
13:04:24 dukhlov, ikhudoshyn any success with it?
13:04:29 yup
13:04:51 in fact there was only one open question for me
13:05:05 the response in case of an unhealthy state
13:05:12 yes
13:05:28 now it returns 503
13:05:31 i would really like an error code other than 200 in that case
13:05:31 ikhudoshyn, 503
13:05:41 and a json response
13:05:47 aostapenko, is it official?
13:05:48 with details
13:06:03 ikhudoshyn, yes. and a text/plain body
13:06:04 aostapenko, great if so
13:06:18 no objections from my side then
13:06:35 but if we decided that it is a "simple" healthcheck
13:07:07 ikhudoshyn, dukhlov please add your 'approved' to the spec, just to keep it consistent
13:07:11 and we have no plans to parse json to get details
13:07:19 there is a simple healthcheck, and a healthcheck that checks subsystems. see the spec please
13:07:34 isviridov, where to?
13:07:48 https://wiki.openstack.org/wiki/MagnetoDB/specs/monitoring-health-check
13:07:50 isviridov, i mean approve?
13:07:59 maybe it is reasonable to return a more detailed status via the status code?
13:08:00 ikhudoshyn, https://wiki.openstack.org/wiki/MagnetoDB/specs/monitoring-health-check#Specification_status
13:08:40 I mean define a few codes for different cases
13:08:50 isviridov, tnx
13:09:01 dukhlov, for example?
13:09:54 dukhlov, are we to provide automatic recovery? if not, then I don't see a use case for that
13:10:31 I think 200 and 503 are enough
13:10:44 that is ok, but in this case it is not clear to me why we are sending json with details
13:11:28 aostapenko, +1
13:12:00 dukhlov, text/plain. Just to provide additional info for the administrator
13:12:46 aostapenko: ah, ok, I forgot that it is not a REST call now
13:13:13 Ok, let us move on
13:13:17 dukhlov, ok?
13:13:21 isviridov, +1
13:13:24 ok
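
(Editor's note: a minimal Python sketch of the healthcheck contract settled above — HTTP 200 when healthy, 503 with a text/plain detail body when not. The subsystem check functions, the port, and the standalone WSGI server are illustrative assumptions, not MagnetoDB's actual implementation.)

    # Sketch only: the real service wires this into its own WSGI pipeline.
    from wsgiref.simple_server import make_server


    def api_is_alive():
        return True  # placeholder subsystem check (assumption)


    def storage_is_alive():
        return True  # placeholder subsystem check (assumption)


    def healthcheck_app(environ, start_response):
        checks = {'API': api_is_alive(), 'Storage': storage_is_alive()}
        healthy = all(checks.values())
        # text/plain details for the administrator, as agreed in the meeting
        body = '. '.join('%s: %s' % (name, 'OK' if ok else 'ERROR')
                         for name, ok in sorted(checks.items()))
        status = '200 OK' if healthy else '503 Service Unavailable'
        start_response(status, [('Content-Type', 'text/plain')])
        return [body.encode('utf-8')]


    if __name__ == '__main__':
        make_server('', 8080, healthcheck_app).serve_forever()

(A monitoring system then only needs the status code, while an administrator can read the body for details — matching the "simple healthcheck" agreement above.)
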
13:13:29 ikhudoshyn dukhlov review https://wiki.openstack.org/wiki/MagnetoDB/specs/rbac
13:14:09 Let me share my feedback also
13:14:15 as for that, I'd love to see a full list of permissions
13:14:32 * isviridov ikhudoshyn is faster
13:14:39 yeap
13:14:49 Apart from permissions based on roles and projects (tenants), do we need anything else?
13:15:10 ajayaa, I don't see anything
13:15:39 ajayaa, could you enumerate the list of actions we are going to restrict
13:15:49 Then the permissions listed in the spec are enough.
13:15:52 Okay.
13:16:04 LGTM in general, but i have seen there some kind of definition of a simple language for rights
13:16:19 We can restrict all actions.
13:16:40 If you want to make an api public then just don't put any rule for it in the policy.json file.
13:16:52 I will provide an example of that in the commit log.
13:16:57 role:admin and project_id:%(project_id)s
13:17:09 now we have "AND"
13:17:19 Do we plan to have "OR"?
13:17:33 dukhlov, yes.
13:17:41 It is already there in policy.
13:17:54 The openstack common code provided already does this.
13:18:12 ajayaa, yes. But the actions will be coded, so having a list will help to document the exact naming
13:18:37 Okay. I will modify the spec to reflect the same.
13:18:54 Thank you
13:19:37 ajayaa: openstack common code? which library? where can I find it?
13:20:34 If you see my patch, it's right there: magnetodb/openstack/common/policy.py
13:21:02 That is the common code shared by every project which does role-based policy checking.
13:21:28 ajayaa, another thing: could you please keep the template structure. I mean https://wiki.openstack.org/wiki/MagnetoDB/specs/template
13:22:15 isviridov, I have missed some points in the template, I think.
13:22:26 I will update the spec. :)
13:22:44 ajayaa, we're trying to get rid of *.openstack.common when possible
13:23:07 ajayaa: cool, thank you
13:23:13 if u know a library where this stuff resides could u pls use it?
13:23:31 ikhudoshyn, There is no library for it as of now.
13:23:41 ajayaa, ok ic
13:23:46 In future the oslo people could include it, but I am not sure.
13:23:52 ikhudoshyn: https://github.com/openstack/oslo-incubator/blob/master/openstack/common/policy.py
13:24:23 * isviridov came back
13:24:25 dukhlov, ikhudoshyn next point?
13:24:32 +1
13:24:36 achudnovets, are we to see oslo.incubator on pypi?
13:24:47 achudnovets, in some future?
13:25:25 It may become a library some day :)
13:25:35 :) tnx
13:25:41 ikhudoshyn, lets move on
13:25:52 isviridov start create spec repo like https://github.com/openstack/nova-specs
13:25:52 ikhudoshyn, Before becoming a library, common code goes through oslo.incubator.
13:25:57 s/ikhudoshyn/isviridov
13:26:26 It is for me. So no progress here yet
13:26:33 ajayaa, that makes sense, i just don't really like copy-pasted code
13:26:36 But we will have it for kilo
13:27:00 isviridov, we're looking forward ))
13:27:13 ikhudoshyn, :)
13:27:15 ominakov describe security impact here https://wiki.openstack.org/wiki/MagnetoDB/specs/monitoring-api
13:27:28 ominakov, around?
13:28:11 Ok. We have other points to discuss
13:28:18 ikhudoshyn, If everybody feels like we should wait for policies to become a library, then I am fine with it. :)
13:28:32 ajayaa, no way)
13:28:45 isviridov: +1 :)
13:29:10 ajayaa, how long can it take?
13:29:23 isviridov, I have no idea.
13:30:19 We had the same with notifications, I don't think that it should stop us.
13:30:51 Or even more, it is a great chance to contribute to oslo
13:31:02 Yes. Besides, every other project is reusing that piece of code.
13:31:29 ajayaa, yeap
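
(Editor's note: for illustration, a minimal policy.json sketch in the oslo-incubator policy language discussed above, showing both "and" and "or" combinators. The action names and rules are hypothetical, not MagnetoDB's actual policy file.)

    {
        "admin_or_owner": "role:admin or project_id:%(project_id)s",
        "mdb:create_table": "rule:admin_or_owner",
        "mdb:delete_table": "role:admin and project_id:%(project_id)s",
        "mdb:list_tables": ""
    }

(An empty rule, as with mdb:list_tables here, leaves the action open to everyone — matching the "make an api public" point above. The policy engine substitutes %(project_id)s from the request target when evaluating the rule.)
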
13:31:49 Ok, the next big topic looks like it's from you
13:31:54 #topic Decide how to do metering. Define a clear boundary between monitoring api and Ceilometer metering through MagnetoDB notifications.
13:32:13 #topic Decide how to do metering
13:32:35 Do we have a basic idea of what to meter?
13:32:56 besides byte usage and #rows in a table
13:33:08 we have docs describing key metrics
13:33:23 has that info been shared with the community?
13:33:36 I don't see it.
13:34:12 we should put it in the blueprint
13:34:23 keith_newstadt, 1 sec
13:34:52 Here it is #link https://docs.google.com/a/mirantis.com/spreadsheets/d/1tYvgCSvkcOVED46MX8qSlUyrhNhHlyTrVkX7AXP-XR4/edit#gid=0
13:35:18 Here is the list
13:35:26 ajayaa, does it work for you?
13:35:45 yep. I can see it.
13:37:15 So the data with Source API == KVaaS API is expected to be collected with the monitoring API
13:38:11 isviridov, and everything else is left for ceilometer?
13:38:40 Ceilometer would only consume notifications as of now.
13:39:00 ajayaa, what about polling data?
13:39:07 If we are going to do metering through ceilometer then we should emit notifications containing this information.
13:39:37 I talked with the ceilometer devs and they are okay with notifications but not polling.
13:39:49 ajayaa, I mean a pollster http://docs.openstack.org/developer/ceilometer/contributing/plugins.html#pollster
13:40:36 What is the reasoning for working only with notifications?
13:41:50 ikhudoshyn, actually everything can be sent to ceilometer, the question is how and whether it is needed there at all
13:42:06 isviridov, no dependency on code of other modules.
13:42:31 services*
13:42:43 ajayaa, got you
13:43:17 ajayaa, yeap it is a big question for us as a non-integrated project. We have to figure out how we can go here
13:43:51 Ok, let us summarize
13:43:54 We could have some code running periodically which would send notifications.
13:44:24 #info ceilometer team prefers notifications
13:45:31 ajayaa, are you ok with the list of metrics, or do you have ideas what we can add?
13:45:58 isviridov, I will go through the list in detail and let you know.
13:46:22 #action ajayaa review current list of metrics
13:46:28 Anything else?
13:46:42 isviridov, move on?
13:46:48 ajayaa, keith_newstadt move on?
13:46:57 okay.
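
(Editor's note: a minimal sketch of emitting a metering notification that ceilometer could consume, as discussed above, using oslo.messaging. The event type, payload fields, and publisher id are hypothetical, not MagnetoDB's actual notification format; a working setup also needs a configured transport URL.)

    from oslo_config import cfg
    import oslo_messaging

    # Transport/driver come from the service configuration in real use.
    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    notifier = oslo_messaging.Notifier(transport,
                                       driver='messaging',
                                       publisher_id='magnetodb.node-1')

    # Hypothetical payload: per-table usage counters from the metrics list.
    payload = {
        'tenant_id': 'abc123',
        'table_name': 'users',
        'table_size_bytes': 1048576,
        'item_count': 5000,
    }
    # ceilometer listens on the 'notifications' topic by default, so no
    # polling of MagnetoDB internals is needed on its side.
    notifier.info({}, 'magnetodb.table.usage', payload)

(This matches the "code running periodically which would send notifications" idea above: a periodic task builds the payload and calls notifier.info.)
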
13:47:08 #topic UUID for a table
13:47:43 The need for this right now is in ceilometer, which needs a field resource_id.
13:47:59 ajayaa, I believe ceilometer is a big topic, let us continue offline. But I really appreciate your work!
13:48:03 which should be unique per resource being measured.
13:48:27 okay.
13:49:09 Also a UUID would help in making our apis more openstack-style.
13:49:13 I personally would like to see a UUID for a table
13:49:32 dukhlov, ikhudoshyn, charlesw any thoughts?
13:49:34 isviridov, yes.
13:49:37 +1
13:49:58 isviridov, where exactly do u want to see them?
13:50:12 As a table attribute
13:50:26 is it only about having them in table_info or do u expect to expose it?
13:50:38 like in the resource url?
13:50:55 ikhudoshyn, unless exposed what value would it add?
13:51:29 ajayaa, that's what I try to figure out)
13:51:51 table name + project id is already unique. What's the purpose of a uuid for a table?
13:52:08 charlesw, +1
13:52:17 charlesw +1
13:52:18 charlesw, a table can be recreated
13:52:29 aostapenko +1
13:52:34 and it will be another resource
13:52:40 I was waiting for you to tell this. :)
13:53:00 how would the user use the uuid?
13:53:11 table name + project id is already unique in the scope of magnetodb
13:53:43 i'm trying to understand the use case we'd be solving for
13:53:47 dukhlov, but not in the scope of OpenStack. If we consider a table as a resource
13:54:05 should this ID be a unique resource for monitoring in the scope of all ceilometer resources being monitored?
13:54:56 dukhlov, yes. At least for one service.
13:57:54 It's not a problem to add a uuid just for ceilometer, but it will be openstack style if we expose it
13:58:32 HEAT has a similar story. Implementing the AWS CloudFormation API, they have the stack name as identifier, but added a UUID as well
13:58:34 http://developer.openstack.org/api-ref-orchestration-v1.html
13:59:10 It looks like we don't have an agreement here. Let us return to it offline or on the ML
13:59:17 so for ceilometer we could have table name + uuid as a resource id
13:59:20 Then we would need to change the MDB resource url to use uuid instead of the table name. Different from Dynamo
13:59:51 charlesw, the OS REST API already differs from the Dynamo one
14:00:06 but this is a topic to discuss anyway
14:00:25 We have run out of time
14:00:40 #topic Juno delivery status overview
14:01:03 The current rc1 is the last version before the juno release
14:01:51 I've created the kilo series and we can start suggesting BPs there https://launchpad.net/magnetodb/kilo
14:02:15 Thank you for attending the meeting.
14:02:20 #stopmeeting
14:02:31 thanks all
14:02:35 isviridov: endmeeting :)
14:02:41 #endmeeting
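
(Editor's note: for reference on the unresolved UUID discussion above, a minimal sketch of assigning a UUID at table creation and deriving a ceilometer resource_id from it. The field names, helper functions, and "table name + uuid" format are hypothetical — one option voiced in the meeting, not an agreed design.)

    import uuid


    def create_table_info(tenant_id, table_name):
        """Attach a UUID so a recreated table counts as a new resource."""
        return {
            'tenant_id': tenant_id,
            'name': table_name,
            # Distinguishes two generations of a table with the same
            # name + project, which is the recreation concern raised above.
            'id': str(uuid.uuid4()),
        }


    def ceilometer_resource_id(table_info):
        # "table name + uuid as a resource id", per the 13:59:17 suggestion.
        return '%s/%s' % (table_info['name'], table_info['id'])
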