20:00:07 #startmeeting trove 20:00:07 Meeting started Wed Aug 21 20:00:07 2013 UTC and is due to finish in 60 minutes. The chair is vipul. Information about MeetBot at http://wiki.debian.org/MeetBot. 20:00:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. 20:00:10 The meeting name has been set to 'trove' 20:00:17 howdy 20:00:18 o/ 20:00:22 o/ 20:00:24 hi 2 all 20:00:25 o/ 20:00:32 o/ 20:00:48 o/ 20:00:49 o/ 20:00:59 #link https://wiki.openstack.org/wiki/Meetings/TroveMeeting 20:01:08 o/ 20:01:10 \-\0/-/ 20:01:12 o/ 20:01:27 #topic action items 20:01:30 o^/ 20:01:35 o/ 20:01:43 imsplitbit: first one is you 20:01:48 imsplitbit to move his clustering reviews to a feature branch 20:02:03 o/ 20:02:03 yeah I've got the clustertype stuff out there for review 20:02:13 i assume this was for the cluster api itself maybe 20:02:14 ? 20:02:18 I've created a feature branch and I'm moving my current work for cluster api into it 20:02:21 o7 20:02:21 yes 20:02:24 cool 20:02:25 I haven't pushed it up yet 20:02:37 can it be extended ? 20:02:37 I would love some feedback on the clustertype 20:02:48 both for trove and trove client 20:02:51 #action imsplitbit to push up cluster api to feature branch 20:03:02 imsplitbit: Yea i've been meaning to find some time 20:03:07 i'll look this week. 20:03:22 Same here 20:03:26 i did browse over it though... 20:03:27 kk thx! 20:03:42 next item was hub_cap.. 20:03:45 i guess we skip him 20:03:50 #action hub_cap to find out what happens w/ feature based reviews that land after FF 20:03:58 next.. SlickNick 20:04:02 Yeah, I made a couple of changes to the devstack review. 20:04:12 But I have to yet make the default role change 20:04:22 So I'm going to action this one again for myself. 20:04:23 #action SlickNik update devstack review to add role to default devstack users. 20:04:27 cool.. 20:04:29 thanks! 20:04:36 and that is all for actions 20:04:45 doh! meant to comment on cluster stuff, was afk for a minute or so :X 20:04:46 #topic Automated Backups Design 20:04:57 arborism: that'll be after this topic 20:05:05 cp16net: wanna take it away? 20:05:13 yes... thx 20:05:28 so this name i think is a little off now that i have more defined 20:05:33 its more about scheduled tasks 20:05:45 #link https://wiki.openstack.org/wiki/Trove/scheduled-tasks 20:06:02 this will require a scheduler that will send messages to the guests to rn the tasks 20:06:24 these tasks could be anything 20:06:44 but initially we will have backups 20:06:59 this could be extended to allowing updates to pacakges for the customer 20:07:01 theoretically, the guest upgrade blueprint could find some usefulness as a scheduled task as well, no? 20:07:12 like mysql/guest or other packages 20:07:29 yep i would think we'd be able to do that 20:07:30 it surely can 20:07:37 niiiiice 20:07:51 @cp16net: will the guest be agnostic about these tasks? 20:08:28 SlickNik: the guest will be able to handle them 20:09:00 the idea is that the scheudler will send a task to the guest to act on 20:09:10 that guest will complete that task and report back on it 20:09:19 cp16net: Sorry if I was a bit unclear. Meant to ask whether the guest would know the difference between whether a task was part of a schedule or not? 20:09:20 that its complete and such 20:09:28 Or does it look like just another task to the gues 20:09:31 cp16net: so what does it mean for a maintenance window 20:09:34 guest* 20:09:44 like does the guest know not to accept any more 'tasks' ? 
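Editor's note: to make the scheduled-tasks proposal above easier to follow, here is a minimal sketch (Python) of what a schedule record and the task message cast to the guest might look like. All field names (task_type, window_start_utc, cron, etc.) are illustrative assumptions for discussion, not part of the linked blueprint.

    # Hypothetical shape (illustration only) of a schedule record and of the
    # message a scheduler service would cast to the guest agent.
    from datetime import datetime, timezone
    import uuid

    schedule = {
        "id": str(uuid.uuid4()),
        "instance_id": "REPLACE-WITH-INSTANCE-UUID",   # placeholder
        "task_type": "backup",            # later maybe "pkg_upgrade", etc.
        "cron": "0 3 * * *",              # e.g. run daily at 03:00 UTC
        "window_start_utc": "03:00",      # customer-declared maintenance window
        "window_end_utc": "05:00",
    }

    def build_task_message(schedule):
        """Build the payload the scheduler would send to the guest."""
        return {
            "task_id": str(uuid.uuid4()),        # guest reports back on this id
            "schedule_id": schedule["id"],
            "task_type": schedule["task_type"],
            "issued_at": datetime.now(timezone.utc).isoformat(),
        }

    print(build_task_message(schedule))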
20:10:04 SlickNik: i think it would be able to tell if it was a scheudled 20:10:16 cp16net: What would the distinction be? 20:10:28 because the guest needs to report back saying that the task it was given is complete 20:10:28 Or rather, what value does having that distinction give us? 20:10:55 Seems like a typical "call". Or a longer running task issued via "cast" which the guest then updates Trove on 20:11:03 vipul: the maintenance window is decalared by the customer of when they would want these schedules to run 20:11:05 So then do we need a separate scheduler component? Or can the guest just run the task based on the schedule of the task? 20:11:10 which currently happens through the database, but will possibly be over RPC in the future via conductor or something. 20:11:33 kinda agree with grapex.. seems unnecessary for the guest to be aware or differentiate how an request to it originated 20:11:41 grapex: this is a good point that we need bidirectional comm between the guest and system 20:11:51 i was thining that the conductor could handle some of this 20:11:56 SlickNik: The guest has to schedule stuff it makes it harder to manage if things begin to fail or die 20:12:06 cp16net: Possibly 20:12:08 but that is just a dream atm 20:12:10 but for this conversation 20:12:22 let's assume that the guest has a decent way to pass back info to Trove 20:12:35 cp16net: I have a feeling the first phase of conductor won't take too long 20:12:36 SlickNik: the idea i have is that theere is a new service running as the scheduler 20:12:50 i thinks the best way it to pass data through DB 20:13:11 dmakogon_: that's how it's done now 20:13:19 grapex: yes i agree that because its just a dream i have 20:13:23 dmakogon_: Maybe- we could create a strategy for using the DB still if people want to- 20:13:35 yea it should be configuratable i suppose 20:13:35 let's save the talk of sending messages back on the guest for when we discuss Conductor later 20:13:45 have the guest agent have different stragetys? 20:13:53 ew.. spellign 20:14:17 let's table that for now.. talk about automated tasks now 20:14:28 and choosing strategy would be configurable ?? 20:14:29 a maintentenance window to me seems like a time the guest should not be able to do things that may be long running 20:14:30 so lets bring this back to the scheduled task 20:14:49 like i want to upgrade my guest agent during a time window.. 20:14:50 its going to handle scheudling a task on behalf of the customer 20:15:03 so i can be sure a backup isn't runnign when i take it down 20:15:06 cp16net: I'm still working through the pro's and cons of having a separate scheduler. 20:15:20 vipul: Ah 20:15:34 Well, we already have code to see if an action in trove can be performed 20:15:44 scheduler should be done like it done in nova 20:15:50 or am i wrong ? 20:16:07 nova scheduler is something different. 20:16:09 grapex: Yea i'd wnat to extend that to take the maintenance window into account i susppose 20:16:10 dmakogon_: thats the idea 20:16:15 vipul: What may be possible is to query to see if the guest or instance is ready at routine intervals and then upgrade if possible 20:16:39 dmakogon_: I think we should take whatever is applicable from Nova 20:16:55 grapex / dmakogon_: I don't think that nova does time based scheduling. 20:16:57 grapex: that could work as well 20:17:10 dmakogon_: Can you create the blueprint? 
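Editor's note: a rough sketch of one "tick" of a standalone scheduler service that only casts tasks inside the customer-declared maintenance window discussed above. The rpc_cast_to_guest callable and the schedule fields are stand-ins, not existing Trove code.

    from datetime import datetime, time, timezone

    def in_window(now_utc, start, end):
        """True if the current UTC time-of-day falls inside the window."""
        t = now_utc.timetz().replace(tzinfo=None)
        if start <= end:
            return start <= t <= end
        return t >= start or t <= end      # window wraps past midnight

    def run_due_tasks(schedules, rpc_cast_to_guest, now=None):
        """One periodic pass of a hypothetical trove-scheduler service."""
        now = now or datetime.now(timezone.utc)
        for sched in schedules:
            if not sched["due"]:           # real impl: evaluate the cron spec
                continue
            if not in_window(now, sched["window_start"], sched["window_end"]):
                continue                   # hold the task until the window opens
            rpc_cast_to_guest(sched["instance_id"],
                              {"task_type": sched["task_type"],
                               "schedule_id": sched["id"]})

    # Example usage with a fake cast function and a fixed "now":
    schedules = [{"id": "s1", "instance_id": "i1", "task_type": "backup",
                  "due": True, "window_start": time(3, 0),
                  "window_end": time(5, 0)}]
    run_due_tasks(schedules, lambda inst, msg: print("cast", inst, msg),
                  now=datetime(2013, 8, 22, 3, 30, tzinfo=timezone.utc))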
20:17:13 SlickNik: agreed, everything time based just is a periodic task within exisitng services 20:17:26 So 20:17:43 key2: i could do that 20:17:43 cp16net: do you see a distinction between what you propose and these time based calls to the guest? 20:18:12 key2 dmakogon_: I'm not sure if there's a dictinction between what you're suggesting and the scheduled tasks blueprint 20:18:13 grapex: cp16net i do see one diff... nova doens't take inot account a user's specified time 20:18:13 i'm a little confused between what the question i am reading 20:18:15 why can't the scheduled task info just be stored on the guest. 20:18:31 Because containers aren't reliable for keeping time 20:18:32 oh i see what you mean 20:18:44 And the guest can decide when to run based on a periodic task, this time based info, and maintenance window info. 20:18:45 and we don't want the guest to grow larger and larger having to track things 20:18:45 SlickNik: Let's say the guest dies- 20:18:53 we dont want to make the guest any more complicated 20:19:02 we rather keep it in the infra 20:19:04 Well, if the guest dies it can't run anything anyway. 20:19:06 SlickNik: Maybe the guest could send back in it's heart beat if it has resources on hand to perform tasks 20:19:21 this needs to be able to handle different senarios 20:19:25 you'd have to give the guest access to the database 20:19:37 where will it configure itself from 20:19:44 NOOOOOOOOOOOOOOO 20:19:47 :) 20:20:02 but we should look at simplifying this.. it may very well be create a crontab entry on the guest 20:20:18 but what drives that might be some trove service 20:20:19 one of the ideas here is that its plugable with diffrernt strategies 20:20:29 well. let's consider how we different from Nova in terms of requirements to the module 20:20:33 yeah I think having a dedicated scheduler makes sense. 20:20:42 A separate scheduler is a single point of failure for jobs across multiple guests. 20:20:44 So the issue is do we want to make the guest store this information on scheduling and also have to be in charge of cron 20:20:51 vipul: i think thats a bad idea to have cron running on the geust 20:20:52 vipul: wouldn't a cron schedule on the guest make it harder to implement the scheduler pause you want to introduce for maintenance windows? 20:21:03 it should be centalized 20:21:14 (and clusterable) 20:21:25 redthrux: +1 20:21:27 if it is centralized then clusterable is a requirement 20:21:30 kevinconway: point.. 20:21:34 redthrux: +1 20:21:52 I desperately want Cassandra )) 20:22:01 SlickNik: not necessarily 20:22:12 Cassandra is easy clusterable 20:22:14 cp16net: you can't really control running _only_ in maintenance windows if it's centralized. 20:22:29 key2: but the point is scheduler for now 20:22:34 SlickNik: Is it because the central point wouldn't know if the maintenance window is happening? 20:22:34 SlickNik: its the only way you can if the customer is to define when the window is 20:22:36 You don't know if the guest / network is down when you schedule the cast. 20:22:37 well lets not get bogged down too heavy in impl details 20:22:52 SlickNik: We have heart beats though, so we should know 20:22:55 SlickNik: there is an api that the customer can define what the window will be 20:22:58 So when the guest picks the message up, it _might_ well be out of the window. 20:23:07 dmakogon_: I mean we should keep clustering in mind 20:23:25 key2: yes, it's really true! 
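Editor's note: the out-of-window concern raised here (the guest may pick a message up late) leads into the TTL / time-window suggestion just below. A guest-side expiry check could look roughly like this; the not_after field is a hypothetical addition to the task message, not an agreed design.

    from datetime import datetime, timezone

    def should_run(task_message, now=None):
        """Guest-side check: drop a scheduled task that arrived too late.

        Assumes the scheduler stamped the message with 'not_after', an
        ISO-8601 UTC timestamp marking the end of the maintenance window
        (hypothetical field, not part of the current guest API).
        """
        now = now or datetime.now(timezone.utc)
        not_after = datetime.fromisoformat(task_message["not_after"])
        if now > not_after:
            # Skip and report back, so the scheduler can requeue it for the
            # next window instead of running it outside the window.
            return False
        return True

    msg = {"task_type": "backup", "not_after": "2013-08-22T05:00:00+00:00"}
    print(should_run(msg, now=datetime(2013, 8, 22, 6, 0,
                                       tzinfo=timezone.utc)))   # -> False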
20:23:27 SlickNik: I am expecting the latency to be short enough that won't be a problem 20:23:30 You should only send messages to services that are up 20:23:31 Maybe I'm assuming too much 20:23:40 SlickNik: you bring up a good point tho 20:23:50 SlickNik: there has to be a fuzzy window 20:23:50 So what if you send one message after another. 20:23:54 Guest is up 20:23:56 SlickNik: I know at Rack, the latency of messages isn't more than a second or so 20:23:58 it can not be exact 20:23:58 grapex: could always treat it like medication. take as soon as possible or just the next dose. whichever is soonest. 20:24:14 But the first action takes a long time to complete, so that when the guest picks the second message, it's out of the window. 20:24:37 SlickNik: Maybe what's needed are TTLs and gauranteed delivery 20:24:46 You can't guarantee maintenance windows unless you build that logic _into_ the guest. 20:24:51 That's logic that can be built into the scheduling hting to only send it the second message after you know the first maintennace task is done 20:24:52 So if the scheduler makes the guest do something, and it doesn't, the request is cancelled 20:25:10 grapex: that's assuming syncronous 20:25:19 SlickNik: There could also be time windows sent- so the guest would know if it's past the given Window don't bother 20:25:23 +1 grapex - easily done with messages in if you are using rabbitmq 20:25:29 it seems like the scheduler component will need to keep task status in mind.. and if it does you can solve for these things 20:25:29 Vipul, what if the guest is already taking a backup based on a user call. 20:25:36 could scheduled tasks come with an expiry time where the guest will refuse it with knowledge that another task is coming? 20:25:41 that's somethign the scheduler shoudl be aware 20:25:46 you shoudln't blindly schedule things 20:25:46 vipul: grapex: SlickNik: should we have a meeting outisde of this meeting on this? 20:25:53 whether you're in a maintenance window or not 20:25:57 cp16net: yes 20:26:01 Yes, please 20:26:05 cp16net: ^^^ 20:26:08 moving on.. evryone.. look over the Design please 20:26:08 cp16net: yes 20:26:18 let's try to meet again 20:26:22 this week or next? 20:26:23 i'd gladly talk more but we do have more to talk about orry 20:26:30 this week perferably 20:26:39 let's throw out some times 20:26:53 tomorrow at 2PST? 20:27:04 we could chat tmorrow at this same time 20:27:16 Works for me 20:27:16 I can't make that but don't hold it up on me 20:27:20 3cst? 20:27:28 ok 1pst tomorrow 3cst 20:27:31 done 20:27:45 #topic Trove API section for DB type selection 20:27:59 imsplitbit: is this you? 20:28:10 no 20:28:14 me 20:28:21 go dmakogon_ 20:28:48 what the idea, we should provide specific chose to user for service type 20:29:23 it could be setted by config, or be stored at DB and be manually added to it 20:29:57 service_type is something that we allow user to specify 20:30:00 in the create instance call 20:30:09 in this way every one could extend initail configs for trove and build custom images with differernt DBs 20:30:13 with that, today you can support >1 type of db in trove 20:30:23 for now it specifies only one type 20:30:30 it's not normal 20:30:34 in the config? yes 20:30:51 dmakogon_: the trove config only specifies the _default_ service type. 20:30:54 trove should do it's dinamically 20:30:57 that becomes the default service_type ... 
BUT if you had another entry in service_images it would honor that 20:31:32 You can still have other service types that you can explicitly pick during the instance create call. 20:31:36 One thing to consider, is that given you'll likely want specific flavors for different service_types, how do you guarantee such affinity? You could let the flavor drive the service_type (e.g. mysql.xl)... 20:31:41 my point is to extend API for adding new dinamic parameter 20:31:47 service_type 20:31:58 arborism: https://blueprints.launchpad.net/trove/+spec/service-type-filter-on-flavors 20:32:00 and it's should be done 20:32:03 arborism: good point. there's a later topic scheduled to discuss that very thing ^^^ 20:32:18 arborism: I'd rather have some new API resource analogous to Nova images that let a user enumerate service types. 20:32:19 dmakogon_: so a management API 20:32:29 dmakogon_: could there be an extension for this? 20:32:31 yes! grapex we disucssed this a while ago 20:32:34 grapex: +1 20:32:36 GET /servicetypes 20:32:54 vipul: Sorry... good we agree though. :) 20:33:00 cp16net: i think we could do that 20:33:09 I like that idea, grapex / vipul 20:33:23 grapex: no worries.. i just mean we should revive that discussion 20:33:31 it was proposed by demorris a while ago.. then sort of died 20:33:45 but since ther is a lot of itnerest in support >1 service type.. we kinda need the API 20:33:48 it wont die 20:34:10 dmakogon_: So i think what you want is a management api to add new service types.. please file a BP 20:34:11 i will try to make a propose at wiki 20:34:20 ok 20:34:20 grapex: so that the maintainer could disable types or set one as default? 20:34:39 done? moving on 20:34:46 good with it 20:34:49 #topic clustering API update 20:34:52 cp16net: Sure 20:34:54 dmakogon_: While spec'ing, can you also take into consideration versioning? i.e. mysql-5.5 vs. 5.6 20:35:11 arborism: ok 20:35:24 dmakogon_: you wanted to chime in here? 20:35:47 arborism: nova images handle the same thing 20:35:48 arborism: anything else beside version? 20:35:50 but it although makes API more complicated 20:36:08 ubuntu 12 vs ubuntu 13 are just different images 20:36:48 anyone have anythign to say about clusterin API? 20:36:53 I wasn't advocating for anything, I was just mentioning that while writing out the specs, to consider the implications of wanting multiple versions of a service_type available. How it's impl'd/handled is up in the air. 20:36:57 vipul: Yes 20:36:57 kevinconway: should you propose still image for it 20:37:20 arborism: dmakogon_: i'll try to find an old wiki for it 20:37:21 vipul: i have 20:37:23 So given the API Ref, I'm not sure I see how it will work in the future w/ the inevitable parameter groups and region awareness requirements 20:37:31 (regarding Clustering API) 20:37:34 go for it.. 20:38:08 if, in future, trove will have multiple service support 20:38:46 yes.. 20:38:50 we sould build flexible clustering API that will be applicapable for all 20:38:55 sorry for slow typing 20:39:02 no worries :) 20:39:06 I believe that's what we're attempting to do 20:39:15 we're trying to make it as open as possible 20:39:30 that is why, although Trove API(single-node) should be changed 20:39:46 #link https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API-Using-Instances 20:39:56 because of alot of NoSQL doesn't support ACL at db-layer 20:40:02 dmakogon_: after lookign at that proposal.. do you think there are things that don't fit with other dbs? 
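Editor's note: to ground the service-type discussion, below is a guess at what a "GET /servicetypes" extension could return, plus a create-instance body naming a non-default service_type (the parameter the API already accepts, per the discussion). Field names beyond service_type and flavorRef are assumptions, including the versions list, which only reflects the mysql-5.5 vs 5.6 point raised above.

    # Hypothetical response body for a "GET /servicetypes" extension,
    # analogous to listing images in Nova.  Nothing here is settled.
    service_types_response = {
        "service_types": [
            {"name": "mysql", "versions": ["5.5", "5.6"], "default": True},
            {"name": "redis", "versions": ["2.6"], "default": False},
            {"name": "cassandra", "versions": ["1.2"], "default": False},
        ]
    }

    # A create-instance request that names a non-default service_type; today
    # the configured service_images mapping decides which image backs it.
    create_request = {
        "instance": {
            "name": "my-redis",
            "flavorRef": "7",
            "service_type": "redis",
        }
    }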
20:40:11 vipul: redis 20:40:15 cassandra 20:40:16 arborism: :P 20:40:26 alot of dbs 20:40:29 are we going to bring up users again 20:40:29 where? 20:40:31 how? 20:40:36 lol 20:40:39 please god no 20:40:39 can i elaborate on redis? 20:40:42 sure 20:40:44 even i don't want to talk about users anymore 20:40:44 without users ;) 20:40:53 please proceed 20:40:59 Say I have 3 DCs, and I want a Redis Master in each 20:41:04 we don't need users in NoSQL 20:41:06 I'll use consistent hashing client side 20:41:10 to pick 20:41:25 How, with the clustering api, will I be able to add a read slave 20:41:36 picking whether I want to daisy chain, or connect directly to master 20:41:47 plus, choose the ability to accept explicit writes on a slave (aka readonly) 20:42:07 I am not sure where in the clustering api it doesn't allow you to do that 20:42:07 Because as a whole, I'd logically consider the entire deployment a cluster, but with the api spec 20:42:27 There's no "nodeType" 20:42:30 only cluster type 20:42:47 i think that's what the clusterConfig is for 20:42:58 you specify the primary.. at least in that example 20:43:08 we may need to extend what goes in there based on service type 20:43:11 well, read replica uses primary, add a node doesn't 20:43:16 there is a concept of roles within thee clustering api 20:43:47 vipul: but doesn't clusterConfig end up becoming a parameter group? 20:44:29 arborism: can you explain what you mean by parameter group? 20:44:38 (or link to something that does, please) 20:44:45 e.g. key-value-pairs related to the service_type 20:44:50 a la, conf pairs 20:45:14 arborism: i guess it would become that if we allowed it to change 20:45:42 maybe what we need is a concrete API example that will do what you want.. 20:45:48 Let me paste out a couple of things I wrote, then add a quick comment: 20:45:56 > In "Create Replication Set: (No previous db instance, fresh)", should be able to specify flavorRef per Node. 20:45:59 and we should compare that with what we've come up with so far 20:46:06 > "Create Replication Set" is a POST to /clusters, but "Add Node" is a PUT to /clusters/{cluster_id}/nodes, this seems inconsistent. 20:46:07 Is primaryNode the means of association, or is it the URI (i.e. /clusters/{cluster_id}) 20:46:21 > Confused on "Promote a slave node to master"; where is it indicating the promotion action explicitly? Why not /clusters/{cluster_id}/promote? 20:46:33 > What's the expected behavior of a resize, delete, restart-db, or restart on /instance/{instance_id}? Block? Forward to /clusters? 20:47:03 arborism: flavorRef of each node should be the same 20:47:15 it is like BP 20:47:30 best-practicies 20:47:30 dmakogon_: Not true. We had that discussion awhile ago. You might want a read slave with a beefier profile to handle ad-hoc queries 20:47:41 correct 20:48:09 I modeled out multiple service types, and arrived at: https://gist.github.com/amcr/96c59a333b72ec973c3a 20:48:17 To me, it seems a little easier to grok 20:48:34 create should allow you to easily create a cluster of instances of all the same flavor really easily but shouldn't be so inflexible so as to disallow individual flavors 20:49:18 imsplitbit: +1 20:49:55 imsplitbit: +1 20:50:12 arborism: i will do it in my way 20:50:20 with regard to the notes, I'll need to look through them more and we can discuss further. but for example, adding a an instance to a cluster is a cluster operation 20:50:29 so it would be in /cluster, not /instance 20:50:32 so do we need to set up another meeting to discuss clusters? 
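Editor's note: purely for comparison with the wiki spec and the gist linked above, an illustration of a cluster-create body that allows a per-node flavorRef (the point several people +1'd) and a service-type-specific clusterConfig block. This is not the agreed API; every field name here ("nodes", "role", "readOnly", and so on) is an assumption for discussion.

    # Illustrative only: one possible POST /clusters body.
    create_cluster_request = {
        "cluster": {
            "name": "products-replset",
            "clusterType": "replication",     # from the clustertype review
            "serviceType": "mysql",
            "nodes": [
                {"flavorRef": "7", "volume": {"size": 2}, "role": "primary"},
                {"flavorRef": "7", "volume": {"size": 2}, "role": "replica"},
                # A beefier read slave for ad-hoc queries, per the discussion:
                {"flavorRef": "9", "volume": {"size": 4}, "role": "replica",
                 "readOnly": True},
            ],
            "clusterConfig": {                # service-type specific knobs
                "replication": "async",
            },
        }
    }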
20:50:34 imsplitbit: +1 20:50:41 seems like it 20:50:46 some of these questions we should send in ML as well 20:50:48 mailing list 20:51:04 Sounds like we have a big enough audience that IRC isn't going to work for every time zone 20:51:17 even if you create a cluster with tha same flavors for each node, you could do a instance_resize, on each of it 20:51:22 and it may be that the doc is unclear and questions like this will help us get that fixed 20:51:35 arborism: what would you prefer 20:51:48 i'm amicable to whatever works for you guys 20:51:48 running out of time.. 10 minutes to go 20:51:52 Yes, we might need to take some of these discussion to the mailing list. 20:52:03 ML +1 20:52:17 arborism: send the email [trove] in subject line :) 20:52:21 #topic Flavors per Service Type 20:52:32 https://blueprints.launchpad.net/trove/+spec/service-type-filter-on-flavors 20:52:36 arborism does raise some valid points that I would like to see addressed. 20:52:39 so arborism this is somehting you might be interested in as well.. 20:52:47 if everyone is good with it.. we'll start working on it 20:53:12 It is, because I want to shield Trove specific flavors from regular compute provisioning 20:53:18 and vice versa 20:53:22 k cool 20:53:27 I'm totally good with it. 20:53:41 #topic Trove Conductor 20:53:48 who's doing this 20:54:01 me 20:54:21 That is, I'm working on implementing a proof of concept first 20:54:43 I don't have a #link handy but KennethWilke wrote up a rough of what we want 20:54:52 datsun180b: i assume you captured some of the comment from today.. about making it optional 20:54:52 #link https://wiki.openstack.org/wiki/Trove/guest_agent_communication 20:55:15 if i didn't, i hope eavesdrop did 20:55:57 It seems like all step one of this needs to be is to set up a OpenStack daemon that receives RPC calls 20:56:00 nice, didn't see this one. we have heartbeats turned off for this very concern. 20:56:07 and receives one for the heartbeat, and another for backup status 20:56:09 grapex: done in poc 20:56:10 Ok everyone please look at ^^ as well, we'll have to tlak more about it next meeting 20:56:12 datsun180b / KennethWilke: reading the wiki on it. Nice explanation! 20:56:14 make the guest use that instead of updating the DB 20:56:29 that's as much as i've got presently though 20:56:30 sorry guys.. moving on... time constraint 20:56:38 #topic Naming convention / style guide 20:56:45 ok real quick 20:56:48 vipul: I'm adding an action for datsun180b 20:56:54 I've been looking through trove code 20:56:55 thanks grapex 20:57:12 #action datsun180b to do pull request on phase one of Trove Conductor 20:57:16 and I've noticed that the json structures aren't consistent for keys 20:57:24 some are camel case and some are underscore 20:57:56 the actual request bodies? 20:57:56 I just wanted to bring that up and see if we can come to an agreement some point soon on getting some consistency there 20:57:56 for example, root_enabled ? 20:58:01 can you link a couple of examples, imsplitbit? 20:58:01 sure 20:58:09 my fault 20:58:11 flavorRef vs. service_type 20:58:16 ugh.. 20:58:36 don't know what the openstack stance is on this 20:58:42 well 20:58:43 Does anyone know if an *official* OpenStack style has popped up int he past few years? 20:58:44 yeah 20:58:49 besides HACKING? 
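Editor's note: a minimal sketch of the conductor endpoint described above, i.e. a daemon the guest casts heartbeat and backup-status updates to instead of writing to the Trove database directly. Method names and the stubbed persistence layer are assumptions; the actual proof of concept is the pull request actioned to datsun180b, and the RPC transport wiring is omitted here.

    class ConductorManager(object):
        """RPC-facing manager for a hypothetical trove-conductor service."""

        def __init__(self, db_api):
            self.db_api = db_api   # whatever persists instance/backup state

        def heartbeat(self, context, instance_id, payload):
            """Guest heartbeat: record that the agent is alive, plus status."""
            self.db_api.update_instance_heartbeat(instance_id, payload)

        def update_backup(self, context, instance_id, backup_id, **fields):
            """Progress or terminal status for a backup the guest is running."""
            self.db_api.update_backup(backup_id, instance_id=instance_id,
                                      **fields)

    # Toy in-memory stand-in for the persistence layer, just to exercise it:
    class _MemoryDB(object):
        def update_instance_heartbeat(self, instance_id, payload):
            print("heartbeat", instance_id, payload)
        def update_backup(self, backup_id, **fields):
            print("backup", backup_id, fields)

    mgr = ConductorManager(_MemoryDB())
    mgr.heartbeat(None, "inst-1", {"service_status": "running"})
    mgr.update_backup(None, "inst-1", "bkup-1", state="COMPLETED", size=42)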
20:58:49 we have flavorRef 20:58:53 resorePoint 20:58:57 and service_type 20:58:57 Originally we looked and found swift and nova had different styles in their API 20:58:59 root_enabled 20:59:19 it was confusing to me having not worked in the api until recently 20:59:21 I'm to blame for root_enabled! I was young and naive! 20:59:28 imsplitbit: It's "flavorRef" as Nova did it that way. 20:59:32 grapex: all lower case, underscored, and using wingdings characters 20:59:34 But it seems like after that 20:59:38 we've gone with PEP8 styles 20:59:45 right 20:59:50 And IIRC Nova is also inconsistent in it's own API 20:59:59 and pep8 would smack you upside the head for camelcase 21:00:10 So maybe what we do is decided to go forward with PEP8 styled field names in the future 21:00:13 I'm not opposed to either 21:00:17 and keep the old names around for backwards compatability 21:00:17 but I'm opposed to both 21:00:21 it looks messy 21:00:23 i don't know that pep8 governs variable naming, past it must be tokenizable by the parser 21:00:41 https://blueprints.launchpad.net/trove/+spec/multi-service-type-support - is gonna be approved ? 21:00:45 yea i didnt think it mattered 21:00:48 datsun180b: Keep in mind when I say PEP8, this isn't a code convention, but one for the Rest API- PEP8 won't catch it 21:00:59 #action Find json style guide vipul hub_cap grapex 21:01:03 http://www.python.org/dev/peps/pep-0008/#naming-conventions 21:01:06 ok moving on.. 21:01:07 pep8 has no take on the matter. 21:01:11 #topic open Discussion 21:01:14 haha gl on finding one :-P 21:01:16 continue.. :) 21:01:18 So quick question 21:01:35 all blueprints are approved for today ? 21:01:39 arborism: https://gist.github.com/amcr/96c59a333b72ec973c3a Is the style here of putting different keys for each service type what we're going with? 21:02:04 oh.. . 21:02:05 No, just a thought experiment of avoiding a /custer api 21:02:06 guess i'd better reread pep8, i'm getting rusty 21:02:10 vipul: grapex: http://javascript.crockford.com/code.html closest you might get to a style guide for JS related things 21:02:12 arborism: Ok 21:02:20 arborism: dmakogon_ this might interest you https://wiki.openstack.org/wiki/Reddwarf-versions-types 21:02:45 it is the serviceType api that never got implemented 21:02:56 vipul: Thanks 21:03:05 datsun180b: http://www.pylint.org/ 21:03:06 One more question - and this may be a can of worms 21:03:13 oh noe grapex :) 21:03:16 and not the fun kind either that jump out like a gag 21:03:18 Is it about users? 21:03:20 ;) 21:03:22 but the bad kind of worms 21:03:26 doh 21:03:29 arborism: Lol 21:03:43 So- the reference guest, as it is today- does it block all incoming RPC calls when doing a backup? 21:03:46 kevinconway: a foolish consistency... 21:03:55 vipul: i think we should do that 21:04:13 datsun180b: meaning if you break consistency ever 21:04:17 for any reason 21:04:20 So i don't think we do... since it's asynchronous 21:04:24 grapex: I think it works on one message at a time. 21:04:38 SlickNik: that is correct.. but create backup isn't sync 21:04:39 I ask this because we need the guest to report on the size of the volume even while doing a backup. 21:04:52 vipul: so its in its own thread when runnning a backup? 21:05:07 we do spawn a subprocess to take the backup 21:05:16 which probably doesn't block 21:05:19 i could be wrong 21:05:23 we spawn a subprocess, yes. 21:05:27 vipul: Ok, so then it can get other RPC calls as it's taking the backup? 21:05:31 Just checking 21:05:46 I _think_ so.. 
i'd have to get back to you 21:05:53 By the way this is what Sneaky Pete does in case it's not obvious what I'm driving at. Some of the questions in the meeting today made me think otherwise. 21:06:01 If it doesn't we can work to change the reference guest 21:06:04 vipul: Ok 21:06:29 i woudl be great to block though :P 21:06:41 so we could do things like upgrade.. and mke sure upgrade only happens after backup is finished 21:07:13 vipul: There are ways around that, like using the trove code that currently checks to see if an action is being performed. 21:07:23 But that's back to the discussion we already finished up. 21:07:29 Since we're past time 21:07:41 alrighty.. calling it done 21:07:45 #endmeeting
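Editor's note: on the closing question about the guest staying responsive during a backup — a toy, POSIX-only sketch of the pattern discussed (spawn the backup as a subprocess from a worker thread so heartbeats and filesystem stats can still be served). The real reference guest differs; this only illustrates the non-blocking idea.

    import shutil
    import subprocess
    import threading

    class GuestBackupMixin(object):
        """Toy illustration of running a backup without blocking other calls."""

        def create_backup(self, backup_cmd, on_done):
            """Start the backup in a worker thread and return immediately."""
            def _run():
                # The real guest streams the dump to object storage; here we
                # just run the command and report success or failure.
                proc = subprocess.Popen(backup_cmd)
                on_done(proc.wait() == 0)
            threading.Thread(target=_run).start()

        def get_filesystem_stats(self):
            """Still answerable while the backup subprocess runs."""
            total, used, free = shutil.disk_usage("/")
            return {"total": total, "used": used, "free": free}

    g = GuestBackupMixin()
    g.create_backup(["sleep", "1"], on_done=lambda ok: print("backup ok:", ok))
    print(g.get_filesystem_stats())   # served right away, not blocked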