14:01:31 #startmeeting isviridov
14:01:31 Meeting started Thu Nov 13 14:01:31 2014 UTC and is due to finish in 60 minutes. The chair is isviridov. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:32 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:34 o/
14:01:34 The meeting name has been set to 'isviridov'
14:01:37 Hi all!
14:01:56 o/
14:02:03 o/
14:02:23 romainh: ?
14:02:44 o/
14:03:05 isviridov: yep
14:03:08 Ok, let us start
14:03:08 o/
14:03:26 #topic action items
14:03:35 #1 dukhlov data encryption support blueprint
14:04:31 I'm working on it now and plan to finish the spec this week
14:05:12 also a draft for the management API is under review now
14:05:30 it is a dependency for the encryption bp
14:05:41 I see #link https://review.openstack.org/#/c/133505/
14:05:49 but it has -1 from jenkins
14:05:53 guys, sorry, could you remind me of the address of our meeting page
14:05:56 yeap
14:06:00 sorry, I will fix it
14:06:07 ikhudoshyn: here it is https://wiki.openstack.org/wiki/MagnetoDB/WeeklyMeetingAgenda#Nov_13.2C_2014.2C_14:00_UTC
14:06:15 isviridov, tnx
14:06:55 Ok, dukhlov, leaving the AI on you
14:06:55 #action dukhlov data encryption support blueprint
14:07:01 sure
14:07:09 #2 ikhudoshyn file a bug about dynamodb version support documentation
14:07:26 ikhudoshyn: any success?
14:07:29 my bad, forgot about it
14:07:35 #help what is the idea behind storage before create_table?
14:07:38 keith_newstadt: GM
14:08:05 ajayaa: what do you mean?
14:08:15 morning all
14:08:59 ikhudoshyn: leaving it on you as well
14:09:04 sure
14:09:09 #action ikhudoshyn file a bug about dynamodb version support documentation
14:09:18 The high level meaning of storage!
14:10:06 Is storage a high level container for tables?
14:10:27 Does it deal with the amount of space being used by tables?
14:10:52 ajayaa: dukhlov, it seems we need a bit more context there
14:10:56 storage is like a keyspace for cassandra
14:11:38 it seems ajayaa is talking about the management API bp
14:11:39 okay. But we have a high level container called tenant for tables.
14:11:59 Ilya Sviridov proposed stackforge/magnetodb-specs: Add specification for part of Management API https://review.openstack.org/133505
14:11:59 I am looking at https://review.openstack.org/#/c/133505/1/specs/kilo/approved/managment-api-for-tenent-configuration-spec.rst
14:12:04 yes, for now storage=tenant
14:12:38 but tenant is an openstack-wide container
14:12:46 dukhlov_, any plans for them to diverge?
14:12:58 not for now
14:13:01 If we add a storage entity, then different users could have the same table name in the same tenant.
14:13:04 Am I right?
14:13:22 but I want to consider this use case for the future
14:13:52 and make the implementation extensible
14:14:06 Can you please tell me another use case?
14:14:20 hm.. in our current API we have 1 tenant = 1 keyspace
14:14:28 sure
14:14:42 ikhudoshyn, +1
14:14:42 but for mgmt you want it different, why?
14:14:47 so far, the use case is simply that we want to provide encrypted tenants
14:14:48 when 1 tenant can have a few storages
14:14:53 looks like YAGNI to me
14:14:58 1 more nested level
14:15:04 i don't think we have a use case for different encryption settings for tables within a tenant
14:15:25 i'd be concerned about adding unnecessary complexity...
14:15:31 thoughts?
14:15:39 keith_newstadt +1
14:15:49 but also storage can have other settings
14:16:04 like consistency level for example
14:16:10 at least i'd like to see the data API and mgmt API consistent with each other
14:16:24 I don't suggest adding complexity now
14:16:47 dukhlov_, how could this affect the existing data API?
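The "storage is like a keyspace" discussion above can be sketched as a simple naming scheme. This is purely illustrative (the function name `keyspace_for` and the naming pattern are assumptions, not MagnetoDB code): today one tenant maps to one Cassandra keyspace, but an intermediate storage level would allow several keyspaces per tenant later, each with its own settings such as encryption or consistency defaults.

```python
# Hypothetical sketch of the proposed tenant -> storage -> keyspace mapping.
# Current behaviour collapses storage into the tenant (1 tenant = 1 keyspace);
# a future extension could allow several storages (keyspaces) per tenant.

def keyspace_for(tenant_id, storage_id="default"):
    """Derive a Cassandra keyspace name for a tenant's storage."""
    if storage_id == "default":
        # Today: one keyspace per tenant.
        return "user_%s" % tenant_id
    # Possible future: one more nested level, several storages per tenant,
    # each backed by its own keyspace with its own settings.
    return "user_%s_%s" % (tenant_id, storage_id)
```

This shows why the extra level is backward compatible: existing tenants keep their current keyspace name, and only explicitly created extra storages get new ones.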
14:17:10 I only suggest leaving the possibility to improve it in the future without changing existing features
14:18:12 YAGNI
14:18:14 ikhudoshyn: for now - it doesn't affect the API
14:18:21 i'd think consistency would be an attribute of a table, rather than a tenant or storage
14:18:41 but that is not possible for Cassandra
14:18:56 storage settings would be specifically for... storage attributes. rather than for API related attributes.
14:18:56 it is an attribute of a keyspace
14:19:49 we can't set quorum settings (required number of reads, e.g.) on a particular table or read/write operation?
14:20:09 we can
14:20:29 yes, each query can set its tunable CL
14:20:43 but we can't set how many replicas we have
14:20:55 per table
14:21:22 i think the number of replicas would be fine as a per tenant setting
14:21:37 makes setting quotas and showback easier as well
14:22:14 Can you change the config after the initial setting?
14:22:39 yes, agree, the drafted bp uses only one storage per tenant
14:23:31 no
14:23:55 only remove all storage and then initialize it again
14:24:34 should probably reject requests to POST to an existing storage
14:24:45 require explicit DELETE before a rePOST
14:24:53 yes
14:26:11 dukhlov keith_newstadt ajayaa we are at the very beginning of the BP discussion, so feel free to comment on the spec
14:26:33 * isviridov just reminds
14:26:52 yes, it would be great
14:27:06 will do
14:27:51 ajayaa: next topic?
14:28:25 let's move on
14:28:31 +1
14:28:41 #3 isviridov ikhudoshyn clarify roadmap item
14:28:58 we kinda did
14:29:11 So, before the summit there was an item about DynamoDB support here https://etherpad.openstack.org/p/magnetodb-kilo-roadmap
14:29:30 ikhudoshyn: yeap, just sharing with the team
14:30:31 As you know, Amazon has released the new version of the API with GlobalIndex support, map data type support and expressions in queries...
14:31:02 So, we have defined the scope as the AWS DynamoDB API 2011-12-05 version.
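The storage lifecycle semantics agreed above (config cannot be changed after initialization; POST to an existing storage is rejected; re-creation requires an explicit DELETE first) can be sketched as follows. This is an assumed illustration of the discussed behaviour, not the actual management API implementation; the class and method names are hypothetical.

```python
# Minimal sketch of the agreed storage lifecycle: immutable after creation,
# 409 on re-POST, explicit DELETE required before re-initializing.

class StorageRegistry:
    """Tracks per-tenant storage configs, mimicking HTTP status codes."""

    def __init__(self):
        self._storages = {}

    def post(self, tenant_id, settings):
        # Reject requests to POST to an existing storage.
        if tenant_id in self._storages:
            return 409  # Conflict: storage already initialized
        self._storages[tenant_id] = settings
        return 201  # Created

    def delete(self, tenant_id):
        # Explicit DELETE is the only way to clear a storage config.
        if tenant_id not in self._storages:
            return 404  # Not Found
        del self._storages[tenant_id]
        return 204  # No Content
```

With this model, "changing" a setting such as replica count is only possible as DELETE followed by a fresh POST, which matches the "remove all storage and then initialize it again" answer above.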
14:31:29 The documentation is still available for downloading in PDF format
14:31:36 ajayaa: rushiagr_away any thoughts?
14:33:10 Ok, just let us move on.
14:33:37 ajayaa: rushiagr_away would be nice to hear from you later
14:33:52 #topic Open discussion
14:34:22 i'm working on backup/restore for mdb
14:34:32 https://review.openstack.org/#/c/133933/ -- it's a draft for the API
14:34:47 pls review and share your thoughts
14:35:11 we had an interesting discussion with the trove folks at the conference
14:35:15 pls note, the above doc is the API spec only, it's not about impl
14:35:35 it occurred to us all that there will be some overlap in what we are designing
14:35:53 about delegating low level maintenance to trove?
14:35:54 magnetodb is a service on the data path, where trove is a db provisioning api
14:36:08 the backup apis overlap between the two
14:36:41 wondering if this is an opportunity for us to collaborate - at least to have consistent apis, possibly to use trove's cassandra support for backup/restore under magnetodb
14:37:06 it would require some additional functionality from the trove folks - e.g. per tenant/keyspace backup
14:37:20 ikhudoshyn: thoughts on that?
14:37:45 well, trove was the 1st thing I looked at
14:37:52 when working on the API
14:38:11 tried to make it alike but not follow it exactly
14:38:47 do you think there are reasons to diverge from the trove api?
14:38:57 or could we consolidate on a single interface?
14:38:59 as for collaborating, the 1st thing is, when will they be ready for prod use?
14:39:37 it's a good question. i don't think they have been thinking about per keyspace backup until now.
so i wouldn't expect them to be moving as quickly as we are in this direction
14:40:15 but we could start working with them on the api design, with plans to either move the functionality into trove, or for us to develop it directly in trove
14:41:07 I think I should recheck their API once again just to see if there are real blockers for us to have a single api
14:41:17 What about import/export functionality?
14:41:26 that was my #2
14:41:44 Hi guys. Sorry, had a meeting.
14:41:50 what is the difference between the use cases for import/export and backup/restore?
14:42:04 i'm picturing import/export to be more magnetodb specific
14:42:13 the API I worked on supports backup in a DB agnostic format
14:42:28 +1
14:42:29 It looks to me like we can use the same API but with something like a type or strategy: backend_database_native or magnetodb
14:42:41 so a user could have all his data in json format and could download it
14:43:32 backup/restore should be operational/system, import/export is user data, logical level
14:43:36 i proposed a 'strategy' param for the 'create backup' call
14:43:36 generally i would use import/export when i want to get the data in a generalized format so that i could bring it into another application or db
14:43:46 charlesw: +1
14:44:10 so import/export would be magnetodb specific, where backup/restore could be via trove
14:44:27 charlesw: do you think we have to keep two APIs for that? export/import and backup/restore?
14:44:36 keith_newstadt: ?
14:44:52 When we use trove for backup, do we have to provision cassandra through trove only?
14:45:05 yes, we should have an admin API
14:45:07 i don't think so
14:45:11 ajayaa, seems like yes
14:45:23 hm. why?
14:46:00 from what i remember, trove stores meta info about its deployments
14:46:04 But all deployers might not want to go with trove, reasons being cassandra performance on vms with shared disk.
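The proposed 'strategy' parameter for the 'create backup' call could look roughly like the sketch below. All names here (`create_backup`, the strategy keys, the helper functions) are hypothetical illustrations of the idea, not the draft spec under review: a DB-agnostic strategy dumps user data as portable JSON, while a backend-native strategy would delegate to Cassandra-level tooling and is stubbed out.

```python
# Sketch of a strategy-driven "create backup" call, assuming two strategies:
# a DB-agnostic JSON dump and a (stubbed) backend-native Cassandra backup.
import json


def backup_agnostic(rows):
    # 'pro': portable, no extra code needed on the DB nodes;
    # 'con': larger backups and slower than a native snapshot.
    return json.dumps(rows)


def backup_native(rows):
    # Would delegate to Cassandra snapshot machinery via an agent on the
    # C* nodes (or Trove); intentionally left as a stub here.
    raise NotImplementedError("requires an agent on the Cassandra nodes")


STRATEGIES = {
    "magnetodb_json": backup_agnostic,
    "backend_database_native": backup_native,
}


def create_backup(rows, strategy="magnetodb_json"):
    """Dispatch a backup request to the chosen strategy."""
    return STRATEGIES[strategy](rows)
```

The dispatch table is the point: a single backup API can start with only the JSON strategy and later gain a DB-aware one without changing the call signature, which is exactly the "stub for a future DB-aware implementation" suggested later in the meeting.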
14:46:10 as well as we do for our tables
14:46:46 so for trove to be able to back up our C* we should pass all the meta they need
14:47:35 in short, this would require a deeper integration than just calling the trove api
14:47:39 Trove relies on an agent working on each Cassandra node, so you can back up only a database provisioned by Trove
14:48:02 if we were to implement backup/restore without trove, would we need an agent on each node as well?
14:48:15 isviridov, +1, this is for the case of DB-aware backups
14:48:26 I feel that we could have two implementations, one with trove and one without trove.
14:48:42 seems wasteful to implement it twice...
14:48:49 the 'pro' of db agnostic backup -- we don't need any additional code on DB nodes
14:49:21 ajayaa, keith_newstadt, in fact this could be chosen via strategy
14:49:25 the 'con' would be performance and backup size
14:49:28 keith_newstadt, In the trove case we wouldn't have to do much, because trove will take care of it.
14:49:35 keith_newstadt, sure
14:50:07 we would need an agent on the cassandra nodes as well to do backup/restore, right?
14:50:20 (for the more performant version)
14:50:21 definitely
14:50:35 anyway, in order to use Trove's backup we should deploy our C* via trove
14:50:48 how would we provision that agent, if we assume we weren't using trove?
14:51:20 the same way we provision C*
14:51:35 ikhudoshyn +1
14:52:00 so we would provide instructions to the operator on how to deploy cassandra, and then our agent on top
14:52:18 and they would use heat or puppet or whatever to implement it.
14:52:20 is that right?
14:52:23 yes
14:52:25 +1
14:52:35 keith_newstadt ikhudoshyn I would suggest defining our own backup/restore API and implementing it to operate on JSONs only. Also leave a stub for a future DB-aware implementation.
Once we have it in Trove we can employ it and have another backup/restore mechanism
14:52:52 couldn't we still use the same agent as trove, or a subset of it
14:53:13 and then it is a question of whether trove deploys that agent, or if the operator does through heat/puppet/etc
14:53:33 Merged stackforge/magnetodb: Adds Cassandra implemetation of monitoring API https://review.openstack.org/132267
14:53:47 keith_newstadt: they deploy an agent via cloud-init and prebuilt images
14:53:48 once again, trove-guestagent relies tightly on the whole trove codebase as well as on trove's db with meta. it's mysql for now
14:54:08 fyi, Netflix Priam has jar files inside C*, runs along with Cassandra. Could be another option
14:54:35 we should consider meeting with the trove folks to discuss
14:55:01 keith_newstadt, +1
14:55:26 as for db-agnostic backup/restore, we should talk about the functionality. sounds more like import/export, which is slightly different functionality-wise
14:55:34 I believe that the Trove weekly meeting is the best option
14:55:42 e.g. import is additive rather than overwrite, doesn't support incrementals, etc
14:56:22 k. i also know the tesora folks, and could reach out to them if that would be useful
14:56:39 denis_makogon: hi
14:57:11 keith_newstadt: export/import sounds not quite right, but the reason was not to divide it from backup/restore.
14:57:13 isviridov, hi
14:58:07 denis_makogon, In trove do we have an option of deploying cassandra nodes on bare metal?
14:58:31 In other words does it use ironic?
14:58:32 isviridov: ok. we can discuss the details in the context of the blueprint
14:58:33 ajayaa, no
14:58:47 ajayaa, it's up to nova
14:58:50 keith_newstadt: but with defining a type or strategy for backup, we can remove it
14:59:17 ajayaa, fyi bare metal is not mainstream any more - docker/lxc rules the world =)
14:59:30 * deprecated
14:59:41 denis_makogon: also not yet supported :)
14:59:49 keith_newstadt, by whom?
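The functional distinction raised above ("import is additive rather than overwrite") is the crux of the import/export vs backup/restore debate, and can be shown in a few lines. This is an illustrative sketch only; the function names are hypothetical and tables are modeled as plain dicts.

```python
# Sketch of the semantic difference discussed above:
# import merges into existing data, restore replaces it wholesale.

def import_items(table, items):
    """Additive: existing items are kept, imported items are merged in."""
    table.update(items)
    return table


def restore_items(table, items):
    """Overwrite: the table is replaced with the backup contents."""
    table.clear()
    table.update(items)
    return table
```

Under these semantics the two operations genuinely differ (import also naturally lacks incrementals), which supports keeping them distinguishable in the API, e.g. via the strategy/type parameter, even if they share one endpoint.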
15:00:10 denis_makogon, that is for the cool kids. :)
15:00:11 trove. or am i wrong?
15:00:25 Not prod-ready I believe.
15:00:46 keith_newstadt, trove would speak only to nova, but nova can run over tons of drivers including lxc/docker/ironic
15:00:56 prod ready?
15:01:41 keith_newstadt, Would you really run cassandra or any other prod application on docker, as it stands today?
15:01:54 keith_newstadt ^^
15:02:07 * isviridov meeting time is over. But let us finish the discussion if you don't mind
15:02:11 ajayaa, +1 )
15:02:45 +1
15:03:31 btw, https://review.openstack.org/#/c/124391/ is up for review. :)
15:03:34 Ok, let us talk to the Trove guys
15:03:50 and i think we should hammer out our use cases for backup/restore
15:04:07 #idea we should hammer out our use cases for backup/restore
15:04:34 #idea deprecate export/import API and use backup/restore instead
15:04:46 goals for talking to trove would be (1) can we avoid having two different apis for db backup/restore inside of openstack and (2) can we avoid having two different implementations of that api
15:05:09 #help
15:05:14 isviridov: +1
15:06:42 #action keith_newstadt isviridov ikhudoshyn clarify if we can avoid having two different apis for db backup/restore inside of openstack
15:06:43 denis_makogon, btw, does Trove have any published spec for the backup/restore API?
15:06:56 ikhudoshyn, sure it has
15:07:06 #action keith_newstadt isviridov ikhudoshyn clarify if we can avoid having two different implementations of that api
15:07:12 could you point me to it?
15:07:15 #link https://wiki.openstack.org/wiki/Trove/snapshot-design
15:07:22 tnx
15:07:29 Sorry guys, will be back to you later
15:07:33 #endmeeting