14:01:31 <isviridov> #startmeeting isviridov
14:01:31 <openstack> Meeting started Thu Nov 13 14:01:31 2014 UTC and is due to finish in 60 minutes.  The chair is isviridov. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:34 <ikhudoshyn> o/
14:01:34 <openstack> The meeting name has been set to 'isviridov'
14:01:37 <ajayaa> Hi all!
14:01:56 <isviridov> o/
14:02:03 <achuprin_> o/
14:02:23 <isviridov> romainh: ?
14:02:44 <nunosantos> o/
14:03:05 <romainh> isviridov: yep
14:03:08 <isviridov> Ok, let us start
14:03:08 <ajayaa> o/
14:03:26 <isviridov> #topic action items
14:03:35 <isviridov> #1 dukhlov data encryption support blueprint
14:04:31 <dukhlov_> I'm working on it now and plan to finish the spec this week
14:05:12 <dukhlov_> also a draft for the management API is under review now
14:05:30 <dukhlov_> it is dependency for encryption bp
14:05:41 <isviridov> I see #link https://review.openstack.org/#/c/133505/
14:05:49 <dukhlov_> but it has -1 from Jenkins
14:05:53 <ikhudoshyn> guys, sry, could u remind me the address of our meeting page
14:05:56 <isviridov> yeap
14:06:00 <dukhlov_> sorry I will fix it
14:06:07 <isviridov> ikhudoshyn : here it is https://wiki.openstack.org/wiki/MagnetoDB/WeeklyMeetingAgenda#Nov_13.2C_2014.2C_14:00_UTC
14:06:15 <ikhudoshyn> isviridov, tnx
14:06:55 <isviridov> Ok, dukhlov leaving AI on you
14:06:55 <isviridov> #action dukhlov data encryption support blueprint
14:07:01 <dukhlov_> sure
14:07:09 <isviridov> #2 ikhudoshyn file a bug about dynamodb version support documentation
14:07:26 <isviridov> ikhudoshyn : any success?
14:07:29 <ikhudoshyn> my bad, forgot bout it
14:07:35 <ajayaa> #help what is the idea behind storage before create_table?
14:07:38 <isviridov> keith_newstadt : GM
14:08:05 <isviridov> ajayaa : what do you mean?
14:08:15 <keith_newstadt> morning all
14:08:59 <isviridov> ikhudoshyn : leaving on you as well
14:09:04 <ikhudoshyn> sure
14:09:09 <isviridov> #action ikhudoshyn file a bug about dynamodb version support documentation
14:09:18 <ajayaa> The high level meaning of storage!
14:10:06 <ajayaa> Is storage a high level container for tables?
14:10:27 <ajayaa> Does it deal with amount of space being used by tables?
14:10:52 <isviridov> ajayaa : dukhlov seems there we need a bit more context
14:10:56 <dukhlov_> storage is like keyspace for cassandra
14:11:38 <dukhlov_> it seems ajayaa is talking about the management API bp
14:11:39 <ajayaa> okay. But we have a high level container called tenant for tables.
14:11:59 <openstackgerrit> Ilya Sviridov proposed stackforge/magnetodb-specs: Add specification for part of Management API  https://review.openstack.org/133505
14:11:59 <ajayaa> I am looking at https://review.openstack.org/#/c/133505/1/specs/kilo/approved/managment-api-for-tenent-configuration-spec.rst
14:12:04 <dukhlov_> yes, for now storage=tenant
14:12:38 <dukhlov_> but tenant is openstack-wide container
14:12:46 <ikhudoshyn> dukhlov_, any plans for them to diverge?
14:12:58 <dukhlov_> not for now
14:13:01 <ajayaa> If we add storage entity, then different users could have same table name in the same tenant.
14:13:04 <ajayaa> Am I right?
14:13:22 <dukhlov_> but I want to consider this usecase for future
14:13:52 <dukhlov_> and make the implementation with the possibility to be extended
14:14:06 <ajayaa> Can you please tell me another usecase?
14:14:20 <ikhudoshyn> hm.. in our current API we have 1 tenant =  1 keyspace
14:14:28 <dukhlov_> sure
14:14:42 <ajayaa> ikhudoshyn, +1
14:14:42 <ikhudoshyn> but for mgmt you want it different, why?
14:14:47 <keith_newstadt> so far, the use case is simply that we want to provide encrypted tenants
14:14:48 <dukhlov_> when 1 tenant can have a few storages
14:14:53 <ikhudoshyn> looks like YAGNI for me
14:14:58 <dukhlov_> 1 more nested level
14:15:04 <keith_newstadt> i don't think we have a use case for different encryption settings for tables within a tenant
14:15:25 <keith_newstadt> i'd be concerned about adding unnecessary complexity...
14:15:31 <keith_newstadt> thoughts?
14:15:39 <ajayaa> keith_newstadt +1
14:15:49 <dukhlov_> but also storage can have another settings
14:16:04 <dukhlov_> like consistency level for example
14:16:10 <ikhudoshyn> at least i'd like to see data API and mgmt API consistent to each other
14:16:24 <dukhlov_> I don't suggest adding complexity now
14:16:47 <ikhudoshyn> dukhlov_, how could this affect existing data API?
14:17:10 <dukhlov_> I only suggest leaving the possibility to improve it in future without changing the existing feature
14:18:12 <ikhudoshyn> YAGNI
14:18:14 <dukhlov_> ikhudoshyn: for now - it doesn't affect the API
14:18:21 <keith_newstadt> i'd think consistency would be an attribute of a table, rather than a tenant or storage
14:18:41 <dukhlov_> but it is not possible for Cassandra
14:18:56 <keith_newstadt> storage settings would be specifically for... storage attributes. rather than for API related attributes.
14:18:56 <dukhlov_> it is attribute of keyspace
14:19:49 <keith_newstadt> we can't set quorum settings (required number of reads, e.g.) on a particular table or read/write operation?
14:20:09 <dukhlov_> we can
14:20:29 <charlesw> yes each query can set its tunable CL
14:20:43 <dukhlov_> but we can't set how many replicas we have
14:20:55 <dukhlov_> per table
14:21:22 <keith_newstadt> i think number of replicas would be fine as a per tenant setting
14:21:37 <keith_newstadt> makes setting quotas and showback easier as well
14:22:14 <charlesw> Can you change the config after initial setting?
14:22:39 <dukhlov_> yes, agree, the drafted bp uses only one storage per tenant
14:23:31 <dukhlov_> no
14:23:55 <dukhlov_> only remove all storage and then initialize it again
14:24:34 <keith_newstadt> should probably reject requests to POST to an existing storage
14:24:45 <keith_newstadt> require explicit DELETE before a rePOST
14:24:53 <dukhlov_> yes
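The create/delete semantics agreed above can be sketched as follows. This is a minimal illustrative model, not the actual MagnetoDB management API: class and method names are hypothetical, and a storage here stands in for the per-tenant keyspace being discussed.

```python
# Hypothetical sketch of the semantics discussed above: a storage can be
# initialized once; a rePOST to an existing storage is rejected, and an
# explicit DELETE is required before it can be initialized again.
# All names are illustrative, not MagnetoDB's real API.

class StorageExistsError(Exception):
    """Raised when a POST targets an already-initialized storage."""

class StorageManager:
    def __init__(self):
        self._storages = {}

    def create_storage(self, tenant_id, settings=None):
        # Reject a rePOST instead of silently reconfiguring the storage.
        if tenant_id in self._storages:
            raise StorageExistsError(tenant_id)
        self._storages[tenant_id] = dict(settings or {})
        return self._storages[tenant_id]

    def delete_storage(self, tenant_id):
        # Explicit DELETE removes the storage so it can be created anew.
        self._storages.pop(tenant_id, None)

mgr = StorageManager()
mgr.create_storage("tenant-a", {"replication_factor": 3})
try:
    mgr.create_storage("tenant-a")   # rePOST without DELETE is rejected
except StorageExistsError:
    print("rejected")
mgr.delete_storage("tenant-a")
mgr.create_storage("tenant-a")       # allowed again after DELETE
```

This mirrors the point that settings cannot be changed in place: the only path to a new configuration is remove-then-reinitialize.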
14:26:11 <isviridov> dukhlov keith_newstadt ajayaa we are at the very beginning of the BP discussion, so feel free to comment on the spec
* isviridov is just reminding
14:26:52 <dukhlov_> yes it would be great
14:27:06 <keith_newstadt> will do
14:27:51 <isviridov> ajayaa : next topic?
14:28:25 <ikhudoshyn> let's move on
14:28:31 <achuprin_> +1
14:28:41 <isviridov> #3 isviridov ikhudoshyn clarify roadmap item
14:28:58 <ikhudoshyn> we kinda did
14:29:11 <isviridov> So, before the summit there was an item about DynamoDB support here https://etherpad.openstack.org/p/magnetodb-kilo-roadmap
14:29:30 <isviridov> ikhudoshyn : yeap, just sharing with team
14:30:31 <isviridov> As you know Amazon has released a new version of the API with GlobalIndex support, map data type support and expressions in queries...
14:31:02 <isviridov> So, we have defined the scope as AWS DynamoDB API 2011-12-05 version.
14:31:29 <isviridov> The documentation is still available for downloading in PDF format
14:31:36 <isviridov> ajayaa : rushiagr_away any thoughts?
14:33:10 <isviridov> Ok, just let us move on.
14:33:37 <isviridov> ajayaa : rushiagr_away would be nice to hear from you later
14:33:52 <isviridov> #topic  Open discussion
14:34:22 <ikhudoshyn> i'm working on backup/restore 4 mdb
14:34:32 <ikhudoshyn> https://review.openstack.org/#/c/133933/ -- it's a draft for API
14:34:47 <ikhudoshyn> pls review and share yr thoughts
14:35:11 <keith_newstadt> we had an interesting discussion with the trove folks at the conference
14:35:15 <ikhudoshyn> pls note, the above doc is API spec only, it's not about impl
14:35:35 <keith_newstadt> it occurred to us all that there will be some overlap in what we are designing
14:35:53 <ikhudoshyn> about to delegate low level maintenance to trove?
14:35:54 <keith_newstadt> magnetodb is a service on the data path, where trove is a db provisioning api
14:36:08 <keith_newstadt> the backup api's overlap between the two
14:36:41 <keith_newstadt> wondering if this is an opportunity for us to collaborate - at least to have consistent apis, possibly to use trove's cassandra support for backup/restore under magnetodb
14:37:06 <keith_newstadt> it would require some additional functionality from the trove folks - e.g. per tenant/keyspace backup
14:37:20 <keith_newstadt> ikhudoshyn: thoughts on that?
14:37:45 <ikhudoshyn> well, trove was the 1st thing I looked at
14:37:52 <ikhudoshyn> when working on API
14:38:11 <ikhudoshyn> tried to make it alike but not follow it exactly
14:38:47 <keith_newstadt> do you think there are reasons to diverge from the trove api?
14:38:57 <keith_newstadt> or could we consolidate on a single interface?
14:38:59 <ikhudoshyn> as for collaborating, 1st thing is, when will they be ready for prod use?
14:39:37 <keith_newstadt> it's a good question.  i don't think they have been thinking about per keyspace backup until now. so i wouldn't expect them to be moving as quickly as we are in this direction
14:40:15 <keith_newstadt> but we could start working with them on the api design, with plans to either move the functionality into trove, or for us to develop it directly into trove
14:41:07 <ikhudoshyn> I think I should recheck their API once again just to see if there are real blockers for us to have a single api
14:41:17 <isviridov> What about import/export functionality?
14:41:26 <ikhudoshyn> that was my #2
14:41:44 <ajayaa> Hi guys. Sorry had a meeting.
14:41:50 <keith_newstadt> what are the differences between the use cases for import/export and backup/restore?
14:42:04 <keith_newstadt> i'm picturing import/export to be more magnetodb specific
14:42:13 <ikhudoshyn> the API I worked on supports backup in a DB agnostic format
14:42:28 <keith_newstadt> +1
14:42:29 <isviridov> It looks to me like we can use the same API but with something like a type or strategy: backend_database_native or magnetodb
14:42:41 <ikhudoshyn> so the user could have all his data in json format and could download it
14:43:32 <charlesw> backup/restore should be operational/system, import/export is user data, logical level
14:43:36 <ikhudoshyn> i proposed 'strategy' param for 'create backup' call
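The 'strategy' parameter proposed above could look roughly like this. Field names and strategy values here are hypothetical placeholders (not the actual spec under review): 'magnetodb_json' stands for the DB-agnostic JSON dump and 'cassandra_native' for a backend-native backup.

```python
import json

# Illustrative shape of a 'create backup' request carrying a 'strategy'
# parameter, as proposed above. All field names and strategy values are
# assumptions for the sake of the sketch, not the real MagnetoDB spec.

def make_backup_request(table_name, strategy="magnetodb_json"):
    # 'magnetodb_json' = DB-agnostic dump; 'cassandra_native' = backend-native.
    allowed = {"magnetodb_json", "cassandra_native"}
    if strategy not in allowed:
        raise ValueError("unknown backup strategy: %s" % strategy)
    return {"backup": {"table": table_name, "strategy": strategy}}

body = json.dumps(make_backup_request("users"))
print(body)
```

Keeping the strategy as a request parameter is what lets the same API cover both the JSON-only implementation and a future DB-aware one, as discussed below in the meeting.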
14:43:36 <keith_newstadt> generally i would use import/export when i want to get the data in a generalized format so that i could bring it into another application or db
14:43:46 <keith_newstadt> charlesw: +1
14:44:10 <keith_newstadt> so import/export would be magnetodb specific, where backup/restore could be via trove
14:44:27 <isviridov> charlesw : do you think we have to keep two APIs for that? export/import and backup/restore?
14:44:36 <isviridov> keith_newstadt: ?
14:44:52 <ajayaa> When we use trove for backup, do we have to provision cassandra through trove only?
14:45:05 <charlesw> yes, we should have admin API
14:45:07 <keith_newstadt> i don't think so
14:45:11 <ikhudoshyn> ajayaa, seems like yes
14:45:23 <keith_newstadt> hm.  why?
14:46:00 <ikhudoshyn> from what i remember, trove stores meta info about its deployments
14:46:04 <ajayaa> But all deployers might not want to go with trove reasons being cassandra performance on vms with shared disk.
14:46:10 <ikhudoshyn> as well as we do for our tables
14:46:46 <ikhudoshyn> so for trove to be able to backup our C* we should pass all meta they need
14:47:35 <ikhudoshyn> in short, this would require a deeper integration, than just calling trove api
14:47:39 <isviridov> Trove relies on an agent working on each Cassandra node, so you can back up only a database provisioned by Trove
14:48:02 <keith_newstadt> if we were to implement backup/restore without trove, would we need an agent on each node as well?
14:48:15 <ikhudoshyn> isviridov, +1, this is for the case of DB-aware backups
14:48:26 <ajayaa> I feel that we could have two implementations, one with trove and one without trove.
14:48:42 <keith_newstadt> seems wasteful to implement it twice...
14:48:49 <ikhudoshyn> the 'pro' of db agnostic backup -- we don't need any additional code on DB nodes
14:49:21 <ikhudoshyn> ajayaa, keith_newstadt, in fact this could be chosen via strategy
14:49:25 <keith_newstadt> the 'con' would be performance and backup size
14:49:28 <ajayaa> keith_newstadt, In trove case we wouldn't have to do much, because trove will take care of it.
14:49:35 <ikhudoshyn> keith_newstadt, sure
14:50:07 <keith_newstadt> we would need an agent on the cassandra nodes as well to do backup/restore, right?
14:50:20 <keith_newstadt> (for the more performant version)
14:50:21 <ikhudoshyn> definitely
14:50:35 <ikhudoshyn> anyway, in order to use Trove's backup we should deploy our C* via trove
14:50:48 <keith_newstadt> how would we provision that agent, if we assume we weren't using trove?
14:51:20 <ikhudoshyn> the same way we provision C*
14:51:35 <ajayaa> ikhudoshyn +1
14:52:00 <keith_newstadt> so we would provide instructions to the operator on how to deploy cassandra, and then our agent on top
14:52:18 <keith_newstadt> and they would use heat or puppet or whatever to implement it.
14:52:20 <keith_newstadt> is that right?
14:52:23 <ikhudoshyn> yes
14:52:25 <ajayaa> +1
14:52:35 <isviridov> keith_newstadt ikhudoshyn I would suggest defining our own backup/restore API and implementing it to operate on JSONs only. Also leave a stub for a future DB-aware implementation. Once we have it in Trove we can employ it and have another backup/restore mechanism
14:52:52 <keith_newstadt> couldn't we still use the same agent as trove, or a subset of it
14:53:13 <keith_newstadt> and then it is a question of whether trove deploys that agent, or if the operator does through heat/puppet/etc
14:53:33 <openstackgerrit> Merged stackforge/magnetodb: Adds Cassandra implemetation of monitoring API  https://review.openstack.org/132267
14:53:47 <isviridov> keith_newstadt : they deploy an agent via cloud-init and prebuilt images
14:53:48 <ikhudoshyn> once again, trove-guestagent relies tightly on the whole trove codebase as well as on trove's db with meta. It's mysql for now
14:54:08 <charlesw> fyi, Netflix Priam has jar files inside C*, runs along with Cassandra. Could be another option
14:54:35 <keith_newstadt> we should consider meeting with the trove folks to discuss
14:55:01 <ajayaa> keith_newstadt, +1
14:55:26 <keith_newstadt> as for db-agnostic backup/restore, we should talk about the functionality. sounds more like import/export, which is slightly different functionality-wise
14:55:34 <isviridov> I believe that Trove weekly meeting is the best option
14:55:42 <keith_newstadt> e.g. import is additive rather than overwrite, doesn't support incrementals, etc
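The additive-vs-overwrite distinction just raised can be sketched as below. This is purely illustrative (dict keys stand in for table primary keys, and the function names are made up for the example), not an implementation of either API.

```python
# Sketch of the semantic difference noted above: import is additive
# (existing rows survive, incoming rows are added or updated), while
# restore replaces the table wholesale with the backup contents.
# All names here are hypothetical, for illustration only.

def import_items(table, items):
    # Additive: merge incoming items over the existing rows.
    merged = dict(table)
    merged.update(items)
    return merged

def restore_backup(table, backup):
    # Overwrite: the table becomes exactly the backup contents.
    return dict(backup)

current = {"id1": {"name": "a"}, "id2": {"name": "b"}}
incoming = {"id2": {"name": "B"}, "id3": {"name": "c"}}

print(import_items(current, incoming))   # id1 kept, id2 updated, id3 added
print(restore_backup(current, incoming)) # only id2 and id3 remain
```

The same distinction drives the rest of the discussion: restore can meaningfully support incrementals over a fixed baseline, while an additive import has no such baseline.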
14:56:22 <keith_newstadt> k.  i also know the tesora folks, and could reach out to them if that would be useful
14:56:39 <isviridov> denis_makogon : hi
14:57:11 <isviridov> keith_newstadt : export/import sounds not quite right, but the reason was not to divide it from backup/restore.
14:57:13 <denis_makogon> isviridov, hi
14:58:07 <ajayaa> denis_makogon, In trove do we have an option of deploying cassandra nodes on bare metal?
14:58:31 <ajayaa> In other words does it use ironic?
14:58:32 <keith_newstadt> isviridov: ok. we can discuss the details in the context of the blueprint
14:58:33 <denis_makogon> ajayaa, no
14:58:47 <denis_makogon> ajayaa,  it's up to nova
14:58:50 <isviridov> keith_newstadt : but with defining type or strategy for backup, we can remove it
14:59:17 <denis_makogon> ajayaa, fyi bare metal is not mainstream any more - docker/lxc rule the world =)
14:59:30 <isviridov> * deprecate
14:59:41 <keith_newstadt> denis_makogon: also not yet supported :)
14:59:49 <denis_makogon> keith_newstadt, by whom ?
15:00:10 <ajayaa> denis_makogon, that is for the cool kids. :)
15:00:11 <keith_newstadt> trove.  or am i wrong?
15:00:25 <ajayaa> Not prod-ready I believe.
15:00:46 <denis_makogon> keith_newstadt, trove would speak only to nova, but nova can run over tons of drivers including lxc/docker/ironic
15:00:56 <keith_newstadt> prod ready?
15:01:41 <ajayaa> keith_newstadt, Would you really run cassandra or any other prod application on docker, as it stands today?
15:02:07 * isviridov meeting time is over. But let us finish discussion if you don't mind
15:02:11 <ikhudoshyn> ajayaa, +1 )
15:02:45 <keith_newstadt> +1
15:03:31 <ajayaa> btw, https://review.openstack.org/#/c/124391/ is up for review. :)
15:03:34 <isviridov> Ok, let us talk to Trove guys
15:03:50 <keith_newstadt> and i think we should hammer out our use cases for backup/restore
15:04:07 <isviridov> #idea we should hammer out our use cases for backup/restore
15:04:34 <isviridov> #idea deprecate export/import API and use backup/restore instead
15:04:46 <keith_newstadt> goals for talking to trove would be (1) can we avoid having two different apis for db backup/restore inside of openstack and (2) can we avoid having two different implementations of that api
15:05:09 <isviridov> #help
15:05:14 <keith_newstadt> isviridov: +1
15:06:42 <isviridov> #action keith_newstadt isviridov ikhudoshyn clarify if we can avoid having two different apis for db backup/restore inside of openstack
15:06:43 <ikhudoshyn> denis_makogon, btw, does Trove have any publised spec for backup/restore API?
15:06:56 <denis_makogon> ikhudoshyn, sure it has
15:07:06 <isviridov> #action keith_newstadt isviridov ikhudoshyn clarify if we can avoid having two different implementations of that api
15:07:12 <ikhudoshyn> could you point me?
15:07:15 <denis_makogon> #link https://wiki.openstack.org/wiki/Trove/snapshot-design
15:07:22 <ikhudoshyn> tnx
15:07:29 <isviridov> Sorry guys, return back to you later
15:07:33 <isviridov> #endmeeting