20:00:07 <vipul> #startmeeting trove
20:00:07 <openstack> Meeting started Wed Aug 21 20:00:07 2013 UTC and is due to finish in 60 minutes.  The chair is vipul. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:10 <openstack> The meeting name has been set to 'trove'
20:00:17 <KennethWilke> howdy
20:00:18 <imsplitbit> o/
20:00:22 <vipul> o/
20:00:24 <dmakogon_> hi 2 all
20:00:25 <amytron> o/
20:00:32 <dmakogon_> o/
20:00:48 <robertmyers> o/
20:00:49 <esp> o/
20:00:59 <vipul> #link https://wiki.openstack.org/wiki/Meetings/TroveMeeting
20:01:08 <pdmars> o/
20:01:10 <kevinconway> \-\0/-/
20:01:12 <SlickNik> o/
20:01:27 <vipul> #topic action items
20:01:30 <cp16net> o^/
20:01:35 <cweid> o/
20:01:43 <vipul> imsplitbit: first one is you
20:01:48 <vipul> imsplitbit to move his clustering reviews to a feature branch
20:02:03 <grapex> o/
20:02:03 <imsplitbit> yeah I've got the clustertype stuff out there for review
20:02:13 <vipul> i assume this was for the cluster api itself maybe
20:02:14 <vipul> ?
20:02:18 <imsplitbit> I've created a feature branch and I'm moving my current work for cluster api into it
20:02:21 <datsun180b> o7
20:02:21 <imsplitbit> yes
20:02:24 <vipul> cool
20:02:25 <imsplitbit> I haven't pushed it up yet
20:02:37 <dmakogon_> can it be extended ?
20:02:37 <imsplitbit> I would love some feedback on the clustertype
20:02:48 <imsplitbit> both for trove and trove client
20:02:51 <vipul> #action imsplitbit to push up cluster api to feature branch
20:03:02 <vipul> imsplitbit: Yea i've been meaning to find some time
20:03:07 <vipul> i'll look this week.
20:03:22 <SlickNik> Same here
20:03:26 <vipul> i did browse over it though...
20:03:27 <imsplitbit> kk thx!
20:03:42 <vipul> next item was hub_cap..
20:03:45 <vipul> i guess we skip him
20:03:50 <vipul> #action hub_cap to find out what happens w/ feature based reviews that land after FF
20:03:58 <vipul> next.. SlickNik
20:04:02 <SlickNik> Yeah, I made a couple of changes to the devstack review.
20:04:12 <SlickNik> But I have yet to make the default role change
20:04:22 <SlickNik> So I'm going to action this one again for myself.
20:04:23 <SlickNik> #action SlickNik update devstack review to add role to default devstack users.
20:04:27 <vipul> cool..
20:04:29 <vipul> thanks!
20:04:36 <vipul> and that is all for actions
20:04:45 <arborism> doh! meant to comment on cluster stuff, was afk for a minute or so :X
20:04:46 <vipul> #topic Automated Backups Design
20:04:57 <vipul> arborism: that'll be after this topic
20:05:05 <vipul> cp16net: wanna take it away?
20:05:13 <cp16net> yes... thx
20:05:28 <cp16net> so this name i think is a little off now that i have it more defined
20:05:33 <cp16net> it's more about scheduled tasks
20:05:45 <vipul> #link https://wiki.openstack.org/wiki/Trove/scheduled-tasks
20:06:02 <cp16net> this will require a scheduler that will send messages to the guests to run the tasks
20:06:24 <cp16net> these tasks could be anything
20:06:44 <cp16net> but initially we will have backups
20:06:59 <cp16net> this could be extended to allowing updates to packages for the customer
20:07:01 <arborism> theoretically, the guest upgrade blueprint could find some usefulness as a scheduled task as well, no?
20:07:12 <cp16net> like mysql/guest or other packages
20:07:29 <vipul> yep i would think we'd be able to do that
20:07:30 <cp16net> it surely can
20:07:37 <arborism> niiiiice
20:07:51 <SlickNik> @cp16net: will the guest be agnostic about these tasks?
20:08:28 <cp16net> SlickNik: the guest will be able to handle them
20:09:00 <cp16net> the idea is that the scheduler will send a task to the guest to act on
20:09:10 <cp16net> that guest will complete that task and report back on it
20:09:19 <SlickNik> cp16net: Sorry if I was a bit unclear. Meant to ask whether the guest would know if a task was part of a schedule or not?
20:09:20 <cp16net> that it's complete and such
20:09:28 <SlickNik> Or does it look like just another task to the guest?
20:09:31 <vipul> cp16net: so what does it mean for a maintenance window
20:09:44 <vipul> like does the guest know not to accept any more 'tasks' ?
20:10:04 <cp16net> SlickNik: i think it would be able to tell if it was scheduled
20:10:16 <grapex> cp16net: What would the distinction be?
20:10:28 <cp16net> because the guest needs to report back saying that the task it was given is complete
20:10:28 <grapex> Or rather, what value does having that distinction give us?
20:10:55 <grapex> Seems like a typical "call". Or a longer running task issued via "cast" which the guest then updates Trove on
20:11:03 <cp16net> vipul: the maintenance window is declared by the customer as the time when they would want these schedules to run
20:11:05 <SlickNik> So then do we need a separate scheduler component? Or can the guest just run the task based on the schedule of the task?
20:11:10 <grapex> which currently happens through the database, but will possibly be over RPC in the future via conductor or something.
20:11:33 <vipul> kinda agree with grapex.. seems unnecessary for the guest to be aware or differentiate how a request to it originated
20:11:41 <cp16net> grapex:  this is a good point that we need bidirectional comm between the guest and system
20:11:51 <cp16net> i was thinking that the conductor could handle some of this
20:11:56 <grapex> SlickNik: If the guest has to schedule stuff, it makes it harder to manage if things begin to fail or die
20:12:06 <grapex> cp16net: Possibly
20:12:08 <cp16net> but that is just a dream atm
20:12:10 <grapex> but for this conversation
20:12:22 <grapex> let's assume that the guest has a decent way to pass back info to Trove
20:12:35 <grapex> cp16net: I have a feeling the first phase of conductor won't take too long
20:12:36 <cp16net> SlickNik: the idea i have is that there is a new service running as the scheduler
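
A minimal sketch of the flow described above, with hypothetical names throughout (this is not trove's actual RPC API): a central scheduler service casts a task to a guest, and the guest runs it and reports completion back.

    # Hypothetical scheduler -> guest flow; all names are illustrative.
    import uuid


    class Scheduler(object):
        """Central service that fires scheduled tasks at guests."""

        def __init__(self, rpc_cast):
            self.rpc_cast = rpc_cast  # assumed thin wrapper over the message bus

        def run_task(self, instance_id, task_name):
            task_id = str(uuid.uuid4())
            # Fire-and-forget cast; the guest reports back asynchronously.
            self.rpc_cast(topic='guestagent.%s' % instance_id,
                          method='run_scheduled_task',
                          args={'task_id': task_id, 'task_name': task_name})
            return task_id


    class GuestAgent(object):
        """Guest side: run the task, then report the result upstream."""

        def __init__(self, report_status, handlers):
            # report_status is caller-provided: a DB update today, maybe a
            # conductor cast later; it must accept an optional 'detail' kwarg.
            self.report_status = report_status
            self.handlers = handlers  # e.g. {'backup': do_backup}

        def run_scheduled_task(self, task_id, task_name):
            try:
                self.handlers[task_name]()
                self.report_status(task_id, 'COMPLETED')
            except Exception as exc:
                self.report_status(task_id, 'FAILED', detail=str(exc))
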
20:12:50 <dmakogon_> i think the best way is to pass data through the DB
20:13:11 <vipul> dmakogon_: that's how it's done now
20:13:19 <cp16net> grapex: yes i agree that because its just a dream i have
20:13:23 <grapex> dmakogon_: Maybe- we could create a strategy for using the DB still if people want to-
20:13:35 <vipul> yea it should be configurable i suppose
20:13:35 <grapex> let's save the talk of sending messages back on the guest for when we discuss Conductor later
20:13:45 <cp16net> have the guest agent have different stragetys?
20:13:53 <cp16net> ew.. spellign
20:14:17 <vipul> let's table that for now.. talk about automated tasks now
20:14:28 <dmakogon_> and choosing strategy would be configurable ??
20:14:29 <vipul> a maintenance window to me seems like a time the guest should not be able to do things that may be long running
20:14:30 <cp16net> so let's bring this back to the scheduled task
20:14:49 <vipul> like i want to upgrade my guest agent during a time window..
20:14:50 <cp16net> it's going to handle scheduling a task on behalf of the customer
20:15:03 <vipul> so i can be sure a backup isn't running when i take it down
20:15:06 <SlickNik> cp16net: I'm still working through the pros and cons of having a separate scheduler.
20:15:20 <grapex> vipul: Ah
20:15:34 <grapex> Well, we already have code to see if an action in trove can be performed
20:15:44 <dmakogon_> scheduler should be done like it's done in nova
20:15:50 <dmakogon_> or am i wrong ?
20:16:07 <SlickNik> nova scheduler is something different.
20:16:09 <vipul> grapex: Yea i'd want to extend that to take the maintenance window into account i suppose
20:16:10 <cp16net> dmakogon_: thats the idea
20:16:15 <grapex> vipul: What may be possible is to query to see if the guest or instance is ready at routine intervals and then upgrade if possible
20:16:39 <grapex> dmakogon_: I think we should take whatever is applicable from Nova
20:16:55 <SlickNik> grapex / dmakogon_: I don't think that nova does time based scheduling.
20:16:57 <vipul> grapex: that could work as well
20:17:10 <key2> dmakogon_: Can you create the blueprint?
20:17:13 <vipul> SlickNik: agreed, everything time based is just a periodic task within existing services
20:17:26 <grapex> So
20:17:43 <dmakogon_> key2: i could do that
20:17:43 <grapex> cp16net: do you see a distinction between what you propose and these time based calls to the guest?
20:18:12 <grapex> key2 dmakogon_: I'm not sure if there's a distinction between what you're suggesting and the scheduled tasks blueprint
20:18:13 <vipul> grapex: cp16net i do see one diff... nova doesn't take into account a user's specified time
20:18:13 <cp16net> i'm a little confused by the question i am reading
20:18:15 <SlickNik> why can't the scheduled task info just be stored on the guest?
20:18:31 <datsun180b> Because containers aren't reliable for keeping time
20:18:32 <cp16net> oh i see what you mean
20:18:44 <SlickNik> And the guest can decide when to run based on a periodic task, this time based info, and maintenance window info.
20:18:45 <redthrux> and we don't want the guest to grow larger and larger having to track things
20:18:45 <grapex> SlickNik: Let's say the guest dies-
20:18:53 <cp16net> we dont want to make the guest any more complicated
20:19:02 <cp16net> we rather keep it in the infra
20:19:04 <SlickNik> Well, if the guest dies it can't run anything anyway.
20:19:06 <grapex> SlickNik: Maybe the guest could send back in its heartbeat if it has resources on hand to perform tasks
20:19:21 <cp16net> this needs to be able to handle different scenarios
20:19:25 <vipul> you'd have to give the guest access to the database
20:19:37 <vipul> where will it configure itself from
20:19:44 <imsplitbit> NOOOOOOOOOOOOOOO
20:19:47 <imsplitbit> :)
20:20:02 <vipul> but we should look at simplifying this.. it may very well be creating a crontab entry on the guest
20:20:18 <vipul> but what drives that might be some trove service
20:20:19 <cp16net> one of the ideas here is that it's pluggable with different strategies
20:20:29 <key2> well. let's consider how we differ from Nova in terms of requirements to the module
20:20:33 <esp> yeah I think having a dedicated scheduler makes sense.
20:20:42 <SlickNik> A separate scheduler is a single point of failure for jobs across multiple guests.
20:20:44 <grapex> So the issue is do we want to make the guest store this information on scheduling and also have to be in charge of cron
20:20:51 <cp16net> vipul: i think that's a bad idea to have cron running on the guest
20:20:52 <kevinconway> vipul: wouldn't a cron schedule on the guest make it harder to implement the scheduler pause you want to introduce for maintenance windows?
20:21:03 <cp16net> it should be centralized
20:21:14 <redthrux> (and clusterable)
20:21:25 <cp16net> redthrux: +1
20:21:27 <imsplitbit> if it is centralized then clusterable is a requirement
20:21:30 <vipul> kevinconway: point..
20:21:34 <key2> redthrux: +1
20:21:52 <key2> I desperately want Cassandra ))
20:22:01 <esp> SlickNik: not necessarily
20:22:12 <dmakogon_> Cassandra is easily clusterable
20:22:14 <SlickNik> cp16net: you can't really control running _only_ in maintenance windows if it's centralized.
20:22:29 <dmakogon_> key2: but the point is scheduler for now
20:22:34 <grapex> SlickNik: Is it because the central point wouldn't know if the maintenance window is happening?
20:22:34 <cp16net> SlickNik: it's the only way you can if the customer is to define when the window is
20:22:36 <SlickNik> You don't know if the guest / network is down when you schedule the cast.
20:22:37 <imsplitbit> well lets not get bogged down too heavy in impl details
20:22:52 <grapex> SlickNik: We have heart beats though, so we should know
20:22:55 <cp16net> SlickNik: there is an api that the customer can define what the window will be
20:22:58 <SlickNik> So when the guest picks the message up, it _might_ well be out of the window.
20:23:07 <key2> dmakogon_: I mean we should keep clustering in mind
20:23:25 <dmakogon_> key2: yes, it's really true!
20:23:27 <grapex> SlickNik: I am expecting the latency to be short enough that it won't be a problem
20:23:30 <vipul> You should only send messages to services that are up
20:23:31 <grapex> Maybe I'm assuming too much
20:23:40 <cp16net> SlickNik: you bring up a good point tho
20:23:50 <cp16net> SlickNik: there has to be a fuzzy window
20:23:50 <SlickNik> So what if you send one message after another.
20:23:54 <SlickNik> Guest is up
20:23:56 <grapex> SlickNik: I know at Rack, the latency of messages isn't more than a second or so
20:23:58 <cp16net> it can not be exact
20:23:58 <kevinconway> grapex: could always treat it like medication. take as soon as possible or just the next dose. whichever is soonest.
20:24:14 <SlickNik> But the first action takes a long time to complete, so that when the guest picks the second message, it's out of the window.
20:24:37 <grapex> SlickNik: Maybe what's needed are TTLs and guaranteed delivery
20:24:46 <SlickNik> You can't guarantee maintenance windows unless you build that logic _into_ the guest.
20:24:51 <vipul> That's logic that can be built into the scheduling thing: only send the second message after you know the first maintenance task is done
20:24:52 <grapex> So if the scheduler makes the guest do something, and it doesn't, the request is cancelled
20:25:10 <vipul> grapex: that's assuming syncronous
20:25:19 <grapex> SlickNik: There could also be time windows sent, so the guest would know: if it's past the given window, don't bother
20:25:23 <redthrux> +1 grapex - easily done with messages if you are using rabbitmq
20:25:29 <vipul> it seems like the scheduler component will need to keep task status in mind.. and if it does you can solve for these things
20:25:29 <SlickNik> Vipul, what if the guest is already taking a backup based on a user call?
20:25:36 <kevinconway> could scheduled tasks come with an expiry time where the guest will refuse it with knowledge that another task is coming?
20:25:41 <vipul> that's something the scheduler should be aware of
20:25:46 <vipul> you shouldn't blindly schedule things
20:25:46 <cp16net> vipul: grapex: SlickNik: should we have a meeting outside of this meeting on this?
20:25:53 <vipul> whether you're in a maintenance window or not
20:25:57 <vipul> cp16net: yes
20:26:01 <SlickNik> Yes, please
20:26:05 <SlickNik> cp16net: ^^^
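
A sketch of the scheduler-side guards argued for above: don't cast to a guest that is down, don't dispatch while a prior task is still running, and skip anything that would land outside the customer's window (kevinconway's expiry idea would be the guest-side complement). The names and the window representation are assumptions, not the agreed design.

    # Illustrative scheduler-side checks only; not trove code.
    from datetime import datetime


    def in_window(window_start, window_end, now=None):
        """True if 'now' falls inside the customer-defined window.

        window_start/window_end are datetime.time values in UTC.
        """
        now = (now or datetime.utcnow()).time()
        if window_start <= window_end:
            return window_start <= now <= window_end
        # Window wraps past midnight, e.g. 23:00-01:00.
        return now >= window_start or now <= window_end


    def maybe_dispatch(task, guest_is_up, last_task_state, cast):
        """Only cast when the guest is up, idle, and inside the window."""
        if not guest_is_up:
            return False  # never cast to a dead guest
        if last_task_state == 'RUNNING':
            return False  # previous task not done; retry next tick
        if not in_window(task['window_start'], task['window_end']):
            return False  # would land outside the maintenance window
        cast(task)
        return True
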
20:26:08 <vipul> moving on.. everyone.. look over the Design please
20:26:08 <key2> cp16net: yes
20:26:18 <vipul> let's try to meet again
20:26:22 <vipul> this week or next?
20:26:23 <cp16net> i'd gladly talk more but we do have more to talk about, sorry
20:26:30 <cp16net> this week preferably
20:26:39 <vipul> let's throw out some times
20:26:53 <vipul> tomorrow at 2PST?
20:27:04 <cp16net> we could chat tomorrow at this same time
20:27:16 <SlickNik> Works for me
20:27:16 <imsplitbit> I can't make that but don't hold it up on me
20:27:20 <cp16net> 3cst?
20:27:28 <vipul> ok 1pst tomorrow 3cst
20:27:31 <vipul> done
20:27:45 <vipul> #topic Trove API section for DB type selection
20:27:59 <vipul> imsplitbit:  is this you?
20:28:10 <imsplitbit> no
20:28:14 <dmakogon_> me
20:28:21 <vipul> go dmakogon_
20:28:48 <dmakogon_> the idea is, we should provide a specific choice to the user for service type
20:29:23 <dmakogon_> it could be set by config, or be stored in the DB and manually added to it
20:29:57 <vipul> service_type is something that we allow the user to specify
20:30:00 <vipul> in the create instance call
20:30:09 <dmakogon_> in this way everyone could extend initial configs for trove and build custom images with different DBs
20:30:13 <vipul> with that, today you can support >1 type of db in trove
20:30:23 <dmakogon_> for now it specifies only one type
20:30:30 <dmakogon_> it's not normal
20:30:34 <vipul> in the config? yes
20:30:51 <SlickNik> dmakogon_: the trove config only specifies the _default_ service type.
20:30:54 <dmakogon_> trove should do it dynamically
20:30:57 <vipul> that becomes the default service_type ... BUT if you had another entry in service_images it would honor that
20:31:32 <SlickNik> You can still have other service types that you can explicitly pick during the instance create call.
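
For reference, an explicit service type in the create call would look roughly like this; the field's exact name and placement are an assumption for illustration, not confirmed API.

    # Rough shape of a create-instance body with an explicit service type.
    # Omitting "service_type" would fall back to the configured default.
    create_request = {
        "instance": {
            "name": "my-db",
            "flavorRef": "7",
            "volume": {"size": 2},
            "service_type": "mysql",
        }
    }
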
20:31:36 <arborism> One thing to consider is that, given you'll likely want specific flavors for different service_types, how do you guarantee such affinity? You could let the flavor drive the service_type (e.g. mysql.xl)...
20:31:41 <dmakogon_> my point is to extend the API for adding a new dynamic parameter
20:31:47 <dmakogon_> service_type
20:31:58 <vipul> arborism: https://blueprints.launchpad.net/trove/+spec/service-type-filter-on-flavors
20:32:00 <dmakogon_> and it should be done
20:32:03 <SlickNik> arborism: good point. there's a later topic scheduled to discuss that very thing ^^^
20:32:18 <grapex> arborism: I'd rather have some new API resource analogous to Nova images that let a user enumerate service types.
20:32:19 <vipul> dmakogon_: so a management API
20:32:29 <cp16net> dmakogon_: could there be an extension for this?
20:32:31 <vipul> yes! grapex we discussed this a while ago
20:32:34 <kevinconway> grapex: +1
20:32:36 <vipul> GET /servicetypes
20:32:54 <grapex> vipul: Sorry... good we agree though. :)
20:33:00 <dmakogon_> cp16net: i think we could do that
20:33:09 <SlickNik> I like that idea, grapex / vipul
20:33:23 <vipul> grapex: no worries.. i just mean we should revive that discussion
20:33:31 <vipul> it was proposed by demorris a while ago.. then sort of died
20:33:45 <vipul> but since there is a lot of interest in supporting >1 service type.. we kinda need the API
20:33:48 <dmakogon_> it won't die
20:34:10 <vipul> dmakogon_: So i think what you want is a management api to add new service types.. please file a BP
20:34:11 <dmakogon_> i will try to write up a proposal on the wiki
20:34:20 <dmakogon_> ok
20:34:20 <cp16net> grapex: so that the maintainer could disable types or set one as default?
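
A sketch of what the proposed resource could return, folding in the disable/default question above and the versioning point raised below; every field name here is hypothetical.

    # Hypothetical response body for GET /servicetypes; nothing implemented yet.
    service_types_response = {
        "serviceTypes": [
            {"name": "mysql", "versions": ["5.5", "5.6"],
             "enabled": True, "default": True},
            {"name": "cassandra", "versions": ["1.2"],
             "enabled": True, "default": False},
        ]
    }
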
20:34:39 <vipul> done? moving on
20:34:46 <SlickNik> good with it
20:34:49 <vipul> #topic clustering API update
20:34:52 <grapex> cp16net: Sure
20:34:54 <arborism> dmakogon_: While spec'ing, can you also take into consideration versioning? i.e. mysql-5.5 vs. 5.6
20:35:11 <dmakogon_> arborism: ok
20:35:24 <vipul> dmakogon_: you wanted to chime in here?
20:35:47 <kevinconway> arborism: nova images handle the same thing
20:35:48 <key2> arborism: anything else beside version?
20:35:50 <dmakogon_> but it also makes the API more complicated
20:36:08 <kevinconway> ubuntu 12 vs ubuntu 13 are just different images
20:36:48 <vipul> anyone have anything to say about the clustering API?
20:36:53 <arborism> I wasn't advocating for anything, I was just mentioning that while writing out the specs, to consider the implications of wanting multiple versions of a service_type available. How it's impl'd/handled is up in the air.
20:36:57 <arborism> vipul: Yes
20:36:57 <dmakogon_> kevinconway: so you would still propose an image for it?
20:37:20 <vipul> arborism: dmakogon_: i'll try to find an old wiki for it
20:37:21 <dmakogon_> vipul: i have
20:37:23 <arborism> So given the API Ref, I'm not sure I see how it will work in the future w/ the inevitable parameter groups and region awareness requirements
20:37:31 <arborism> (regarding Clustering API)
20:37:34 <vipul> go for it..
20:38:08 <dmakogon_> if, in the future, trove will have multiple service support
20:38:46 <vipul> yes..
20:38:50 <dmakogon_> we should build a flexible clustering API that will be applicable to all
20:38:55 <dmakogon_> sorry for slow typing
20:39:02 <vipul> no worries :)
20:39:06 <imsplitbit> I believe that's what we're attempting to do
20:39:15 <imsplitbit> we're trying to make it as open as possible
20:39:30 <dmakogon_> that is why the Trove API (single-node) should also be changed
20:39:46 <vipul> #link https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API-Using-Instances
20:39:56 <dmakogon_> because a lot of NoSQL doesn't support ACLs at the db layer
20:40:02 <vipul> dmakogon_: after looking at that proposal.. do you think there are things that don't fit with other dbs?
20:40:11 <arborism> vipul: redis
20:40:15 <dmakogon_> cassandra
20:40:16 <vipul> arborism: :P
20:40:26 <dmakogon_> a lot of dbs
20:40:29 <vipul> are we going to bring up users again
20:40:29 <imsplitbit> where?
20:40:31 <imsplitbit> how?
20:40:36 <imsplitbit> lol
20:40:39 <imsplitbit> please god no
20:40:39 <arborism> can i elaborate on redis?
20:40:42 <imsplitbit> sure
20:40:44 <kevinconway> even i don't want to talk about users anymore
20:40:44 <arborism> without users ;)
20:40:53 <imsplitbit> please proceed
20:40:59 <arborism> Say I have 3 DCs, and I want a Redis Master in each
20:41:04 <dmakogon_> we don't need users in NoSQL
20:41:06 <arborism> I'll use consistent hashing client side
20:41:10 <arborism> to pick
20:41:25 <arborism> How, with the clustering api, will I be able to add a read slave
20:41:36 <arborism> picking whether I want to daisy chain, or connect directly to master
20:41:47 <arborism> plus, choose the ability to accept explicit writes on a slave (aka readonly)
20:42:07 <imsplitbit> I am not sure where in the clustering api it doesn't allow you to do that
20:42:07 <arborism> Because as a whole, I'd logically consider the entire deployment a cluster, but with the api spec
20:42:27 <arborism> There's no "nodeType"
20:42:30 <arborism> only cluster type
20:42:47 <vipul> i think that's what the clusterConfig is for
20:42:58 <vipul> you specify the primary.. at least in that example
20:43:08 <vipul> we may need to extend what goes in there based on service type
20:43:11 <arborism> well, read replica uses primary, add a node doesn't
20:43:16 <imsplitbit> there is a concept of roles within the clustering api
20:43:47 <arborism> vipul: but doesn't clusterConfig end up becoming a parameter group?
20:44:29 <SlickNik> arborism: can you explain what you mean by parameter group?
20:44:38 <SlickNik> (or link to something that does, please)
20:44:45 <arborism> e.g. key-value-pairs related to the service_type
20:44:50 <arborism> a la, conf pairs
20:45:14 <vipul> arborism: i guess it would become that if we allowed it to change
20:45:42 <vipul> maybe what we need is a concrete API example that will do what you want..
20:45:48 <arborism> Let me paste out a couple of things I wrote, then add a quick comment:
20:45:56 <arborism> > In "Create Replication Set: (No previous db instance, fresh)", should be able to specify flavorRef per Node.
20:45:59 <vipul> and we should compare that with what we've come up with so far
20:46:06 <arborism> > "Create Replication Set" is a POST to /clusters, but "Add Node" is a PUT to /clusters/{cluster_id}/nodes, this seems inconsistent.
20:46:07 <arborism> Is primaryNode the means of association, or is it the URI (i.e. /clusters/{cluster_id})
20:46:21 <arborism> > Confused on "Promote a slave node to master"; where is it indicating the promotion action explicitly? Why not /clusters/{cluster_id}/promote?
20:46:33 <arborism> > What's the expected behavior of a resize, delete, restart-db, or restart on /instance/{instance_id}? Block? Forward to /clusters?
20:47:03 <dmakogon_> arborism: flavorRef of each node should be the same
20:47:15 <dmakogon_> it is like BP
20:47:30 <dmakogon_> best practices
20:47:30 <arborism> dmakogon_: Not true. We had that discussion a while ago. You might want a read slave with a beefier profile to handle ad-hoc queries
20:47:41 <imsplitbit> correct
20:48:09 <arborism> I modeled out multiple service types, and arrived at: https://gist.github.com/amcr/96c59a333b72ec973c3a
20:48:17 <arborism> To me, it seems a little easier to grok
20:48:34 <imsplitbit> create should allow you to create a cluster of instances of all the same flavor really easily, but shouldn't be so inflexible as to disallow individual flavors
20:49:18 <arborism> imsplitbit: +1
20:49:55 <key2> imsplitbit: +1
20:50:12 <dmakogon_> arborism: i will do it in my way
20:50:20 <imsplitbit> with regard to the notes, I'll need to look through them more and we can discuss further. But for example, adding an instance to a cluster is a cluster operation
20:50:29 <imsplitbit> so it would be in /cluster, not /instance
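
A sketch of a cluster-create body that would cover both points raised here: per-node flavorRef, plus an explicit per-node role rather than only a cluster-wide type. The field names are invented for illustration and are not from the reviewed spec.

    # Hypothetical POST /clusters body; illustrative only.
    cluster_create_request = {
        "cluster": {
            "name": "redis-ring",
            "clusterType": "replication",
            "nodes": [
                {"flavorRef": "7", "role": "master"},
                # Beefier read slave for ad-hoc queries, chained to the master.
                {"flavorRef": "9", "role": "slave", "replicatesFrom": "master"},
            ],
        }
    }
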
20:50:32 <vipul> so do we need to set up another meeting to discuss clusters?
20:50:34 <dmakogon_> imsplitbit: +1
20:50:41 <imsplitbit> seems like it
20:50:46 <vipul> some of these questions we should send in ML as well
20:50:48 <vipul> mailing list
20:51:04 <vipul> Sounds like we have a big enough audience that IRC isn't going to work for every time zone
20:51:17 <dmakogon_> even if you create a cluster with the same flavors for each node, you could do an instance_resize on each of them
20:51:22 <imsplitbit> and it may be that the doc is unclear and questions like this will help us get that fixed
20:51:35 <vipul> arborism: what would you prefer
20:51:48 <arborism> i'm amenable to whatever works for you guys
20:51:48 <vipul> running out of time.. 10 minutes to go
20:51:52 <SlickNik> Yes, we might need to take some of these discussions to the mailing list.
20:52:03 <key2> ML +1
20:52:17 <vipul> arborism: send the email with [trove] in the subject line :)
20:52:21 <vipul> #topic Flavors per Service Type
20:52:32 <vipul> https://blueprints.launchpad.net/trove/+spec/service-type-filter-on-flavors
20:52:36 <SlickNik> arborism does raise some valid points that I would like to see addressed.
20:52:39 <vipul> so arborism this is something you might be interested in as well..
20:52:47 <vipul> if everyone is good with it.. we'll start working on it
20:53:12 <arborism> It is, because I want to shield Trove specific flavors from regular compute provisioning
20:53:18 <arborism> and vice versa
20:53:22 <vipul> k cool
20:53:27 <SlickNik> I'm totally good with it.
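
The blueprint's effect, roughly: flavor listings get filtered through a service-type mapping, so trove-only flavors stay out of regular compute provisioning and vice versa. A hedged sketch with assumed data shapes:

    # Illustrative filter per the service-type-filter-on-flavors idea.
    def flavors_for_service_type(all_flavors, mapping, service_type):
        """Return only the flavors mapped to the given service type.

        mapping is assumed to look like {flavor_id: {"mysql", ...}}.
        """
        return [f for f in all_flavors
                if service_type in mapping.get(f["id"], set())]
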
20:53:41 <vipul> #topic Trove Conductor
20:53:48 <vipul> who's doing this
20:54:01 <datsun180b> me
20:54:21 <datsun180b> That is, I'm working on implementing a proof of concept first
20:54:43 <datsun180b> I don't have a #link handy but KennethWilke wrote up a rough of what we want
20:54:52 <vipul> datsun180b: i assume you captured some of the comments from today.. about making it optional
20:54:52 <datsun180b> #link https://wiki.openstack.org/wiki/Trove/guest_agent_communication
20:55:15 <datsun180b> if i didn't, i hope eavesdrop did
20:55:57 <grapex> It seems like all step one of this needs to be is to set up an OpenStack daemon that receives RPC calls
20:56:00 <arborism> nice, didn't see this one. we have heartbeats turned off for this very concern.
20:56:07 <grapex> and receives one for the heartbeat, and another for backup status
20:56:09 <datsun180b> grapex: done in poc
20:56:10 <vipul> Ok everyone please look at ^^ as well, we'll have to talk more about it next meeting
20:56:12 <SlickNik> datsun180b / KennethWilke: reading the wiki on it. Nice explanation!
20:56:14 <grapex> make the guest use that instead of updating the DB
20:56:29 <datsun180b> that's as much as i've got presently though
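
Phase one, as grapex frames it above, is just a daemon exposing two RPC endpoints (heartbeat and backup status) that persists on the guest's behalf. A minimal sketch; the method names are guesses, not the PoC's.

    # Sketch of a minimal conductor manager; names are illustrative guesses.
    class ConductorManager(object):
        """Receives casts from guests and persists them, so guests no
        longer need direct DB access."""

        def __init__(self, db):
            self.db = db

        def heartbeat(self, context, instance_id, payload):
            # e.g. payload = {'service_status': 'running'}
            self.db.save_heartbeat(instance_id, payload)

        def update_backup(self, context, instance_id, backup_id, **fields):
            # e.g. fields = {'state': 'COMPLETED', 'size': 2.0}
            self.db.save_backup(backup_id, fields)
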
20:56:30 <vipul> sorry guys.. moving on... time constraint
20:56:38 <vipul> #topic Naming convention / style guide
20:56:45 <imsplitbit> ok real quick
20:56:48 <grapex> vipul: I'm adding an action for datsun180b
20:56:54 <imsplitbit> I've been looking through trove code
20:56:55 <vipul> thanks grapex
20:57:12 <grapex> #action datsun180b to do pull request on phase one of Trove Conductor
20:57:16 <imsplitbit> and I've noticed that the json structures aren't consistent for keys
20:57:24 <imsplitbit> some are camel case and some are underscore
20:57:56 <vipul> the actual request bodies?
20:57:56 <imsplitbit> I just wanted to bring that up and see if we can come to an agreement at some point soon on getting some consistency there
20:57:56 <datsun180b> for example, root_enabled ?
20:58:01 <SlickNik> can you link a couple of examples, imsplitbit?
20:58:01 <imsplitbit> sure
20:58:09 <datsun180b> my fault
20:58:11 <arborism> flavorRef vs. service_type
20:58:16 <vipul> ugh..
20:58:36 <vipul> don't know what the openstack stance is on this
20:58:42 <imsplitbit> well
20:58:43 <grapex> Does anyone know if an *official* OpenStack style has popped up in the past few years?
20:58:44 <imsplitbit> yeah
20:58:49 <datsun180b> besides HACKING?
20:58:49 <imsplitbit> we have flavorRef
20:58:53 <imsplitbit> restorePoint
20:58:57 <imsplitbit> and service_type
20:58:57 <grapex> Originally we looked and found swift and nova had different styles in their API
20:58:59 <imsplitbit> root_enabled
20:59:19 <imsplitbit> it was confusing to me having not worked in the api until recently
20:59:21 <datsun180b> I'm to blame for root_enabled! I was young and naive!
20:59:28 <grapex> imsplitbit: It's "flavorRef" as Nova did it that way.
20:59:32 <kevinconway> grapex: all lower case, underscored, and using wingdings characters
20:59:34 <grapex> But it seems like after that
20:59:38 <grapex> we've gone with PEP8 styles
20:59:45 <imsplitbit> right
20:59:50 <grapex> And IIRC Nova is also inconsistent in its own API
20:59:59 <imsplitbit> and pep8 would smack you upside the head for camelcase
21:00:10 <grapex> So maybe what we do is decide to go forward with PEP8 styled field names in the future
21:00:13 <imsplitbit> I'm not opposed to either
21:00:17 <grapex> and keep the old names around for backwards compatibility
21:00:17 <imsplitbit> but I'm opposed to mixing both
21:00:21 <imsplitbit> it looks messy
21:00:23 <datsun180b> i don't know that pep8 governs variable naming, beyond that it must be tokenizable by the parser
21:00:41 <dmakogon_> https://blueprints.launchpad.net/trove/+spec/multi-service-type-support - is gonna be approved ?
21:00:45 <cp16net> yea i didn't think it mattered
21:00:48 <grapex> datsun180b: Keep in mind when I say PEP8, this isn't a code convention, but one for the REST API; PEP8 won't catch it
21:00:59 <vipul> #action Find json style guide vipul hub_cap grapex
21:01:03 <kevinconway> http://www.python.org/dev/peps/pep-0008/#naming-conventions
21:01:06 <vipul> ok moving on..
21:01:07 <SlickNik> pep8 has no take on the matter.
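
One way to get to the approach grapex suggests above (PEP 8 style names going forward, old names kept for backwards compatibility) is to normalize keys at the deserialization boundary. A sketch with an assumed alias table:

    # Hypothetical key normalization for mixed camelCase/snake_case bodies.
    import re

    LEGACY_ALIASES = {"flavorRef": "flavor_ref", "restorePoint": "restore_point"}


    def to_snake_case(key):
        return re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', key).lower()


    def normalize_keys(body):
        """Accept legacy camelCase keys but hand snake_case to new code."""
        out = {}
        for key, value in body.items():
            if isinstance(value, dict):
                value = normalize_keys(value)
            out[LEGACY_ALIASES.get(key, to_snake_case(key))] = value
        return out
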
21:01:11 <vipul> #topic open Discussion
21:01:14 <cp16net> haha gl on finding one :-P
21:01:16 <vipul> continue.. :)
21:01:18 <grapex> So quick question
21:01:35 <dmakogon_> all blueprints are approved for today ?
21:01:39 <grapex> arborism: https://gist.github.com/amcr/96c59a333b72ec973c3a Is the style here of putting different keys for each service type what we're going with?
21:02:04 <vipul> oh.. .
21:02:05 <arborism> No, just a thought experiment of avoiding a /cluster api
21:02:06 <datsun180b> guess i'd better reread pep8, i'm getting rusty
21:02:10 <kevinconway> vipul: grapex: http://javascript.crockford.com/code.html closest you might get to a style guide for JS related things
21:02:12 <grapex> arborism: Ok
21:02:20 <vipul> arborism: dmakogon_ this might interest you https://wiki.openstack.org/wiki/Reddwarf-versions-types
21:02:45 <vipul> it is the serviceType api that never got implemented
21:02:56 <grapex> vipul: Thanks
21:03:05 <kevinconway> datsun180b: http://www.pylint.org/
21:03:06 <grapex> One more question - and this may be a can of worms
21:03:13 <vipul> oh noe grapex :)
21:03:16 <grapex> and not the fun kind either that jump out like a gag
21:03:18 <arborism> Is it about users?
21:03:20 <arborism> ;)
21:03:22 <grapex> but the bad kind of worms
21:03:26 <cp16net> doh
21:03:29 <grapex> arborism: Lol
21:03:43 <grapex> So- the reference guest, as it is today- does it block all incoming RPC calls when doing a backup?
21:03:46 <datsun180b> kevinconway: a foolish consistency...
21:03:55 <dmakogon_> vipul: i think we should do that
21:04:13 <kevinconway> datsun180b: meaning if you break consistency ever
21:04:17 <kevinconway> for any reason
21:04:20 <vipul> So i don't think we do... since it's asynchronous
21:04:24 <SlickNik> grapex: I think it works on one message at a time.
21:04:38 <vipul> SlickNik: that is correct.. but create backup isn't sync
21:04:39 <grapex> I ask this because we need the guest to report on the size of the volume even while doing a backup.
21:04:52 <cp16net> vipul: so it's in its own thread when running a backup?
21:05:07 <vipul> we do spawn a subprocess to take the backup
21:05:16 <vipul> which probably doesn't block
21:05:19 <vipul> i could be wrong
21:05:23 <SlickNik> we spawn a subprocess, yes.
21:05:27 <grapex> vipul: Ok, so then it can get other RPC calls as it's taking the backup?
21:05:31 <grapex> Just checking
21:05:46 <vipul> I _think_ so.. i'd have to get back to you
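
For reference, the pattern in question: if the backup runs in a spawned subprocess watched by a worker, the guest's RPC loop stays free to answer other calls (such as volume-size reports) in the meantime. A simplified thread-based sketch, not the actual guestagent implementation:

    # Simplified illustration of a non-blocking backup; not trove's code.
    import subprocess
    import threading


    def start_backup(cmd, on_done):
        """Run the backup command without blocking the caller's RPC loop."""
        def _run():
            proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            out, err = proc.communicate()  # only this worker thread blocks
            on_done(proc.returncode, out, err)

        worker = threading.Thread(target=_run)
        worker.daemon = True
        worker.start()
        return worker
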
21:05:53 <grapex> By the way this is what Sneaky Pete does in case it's not obvious what I'm driving at. Some of the questions in the meeting today made me think otherwise.
21:06:01 <grapex> If it doesn't we can work to change the reference guest
21:06:04 <grapex> vipul: Ok
21:06:29 <vipul> it would be great to block though :P
21:06:41 <vipul> so we could do things like upgrade.. and make sure upgrade only happens after backup is finished
21:07:13 <grapex> vipul: There are ways around that, like using the trove code that currently checks to see if an action is being performed.
21:07:23 <grapex> But that's back to the discussion we already finished up.
21:07:29 <grapex> Since we're past time
21:07:41 <vipul> alrighty.. calling it done
21:07:45 <vipul> #endmeeting