20:00:32 #startmeeting trove
20:00:37 Meeting started Wed Aug 7 20:00:32 2013 UTC and is due to finish in 60 minutes. The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:38 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:40 The meeting name has been set to 'trove'
20:00:44 hello!
20:00:51 #link https://wiki.openstack.org/wiki/Meetings/TroveMeeting
20:00:56 moving my desk inside brb
20:01:03 hi guys
20:01:11 #help
20:01:36 hi ashestakov
20:01:43 welcome
20:02:03 got a packed meeting today
20:02:08 lets get started w/ last wks action items
20:02:23 o/
20:02:27 #link http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-07-31-20.00.html
20:02:37 ^ ^ not updated on the meeting page, so use this link
20:02:55 only one AI, vipul get Nik
20:03:01 #topic action items
20:03:16 SlickNiTABTAB
20:03:16 oh i know this one
20:03:20 exactly
20:03:28 so, ive added the core team to the -ptl group
20:03:30 i holla'd at him
20:03:34 so we can all upload to pypi
20:03:44 and i uploaded a new tag to pypi
20:03:53 as per mordreds request
20:04:11 sorry, running a bit late.
20:04:12 so SlickNik grapex vipul you can all tag
20:04:28 thanks for explaining the tagging situation hub_cap
20:04:32 hi
20:04:34 np!
20:04:37 hi adrian_otto
20:04:38 #topic clustering api update
20:04:40 hello
20:04:43 imsplitbit: anything to report here?
20:04:44 yay!
20:04:47 well sure
20:04:52 GO GO GO
20:04:52 I got hung up on testing
20:04:59 hi
20:05:01 but got something worked out and have unittests done
20:05:09 awesome
20:05:14 I started working on adding clustertypes to troveclient
20:05:21 this is all just for clustertypes btw
20:05:21 #link https://wiki.openstack.org/wiki/GerritJenkinsGithub#Tagging_a_Release
20:05:22 sorry
20:05:30 but yeah I'm close to having that done
20:05:33 cool. good first steps
20:05:37 should be today or tomorrow
20:05:41 cool looking forward to it imsplitbit
20:05:54 great work. maybe consider pushing a review for us to look at?
20:06:00 once the clustertypes is done
20:06:07 as soon as I have the client
20:06:10 I will push both
20:06:15 yes perfect
20:06:17 that way they can be reviewed/tested
20:06:25 ok. moving on if no questions
20:06:47 #topic docstring rules
20:06:54 so SlickNik did some great work to get developer docs
20:06:58 SlickNik: can u link em
20:07:11 but he mentioned that we arent adding our module docs, cuz frankly
20:07:14 they suck
20:07:25 #link http://docs.openstack.org/developer/trove/
20:07:29 so we brainstormed on adding a soft rule to our reviews
20:07:49 1) new methods must have a docstring (unless they are like, 1 liner @properties, use good judgement)
20:08:02 2) if you mod an existing method, add a docstring to it, but dont go doc'ing up the whole module
20:08:18 does that sound fair to everyone?
20:08:22 hub_cap: Should doc'ing up the whole module be in a separate PR?
20:08:23 i am in favor of this
20:08:29 grapex: id think so
20:08:31 hub_cap: Sounds good.
20:08:40 if you _want_ to go doc up a module, do it, for sure
20:08:41 works for me
20:08:46 but not _in_ another review
20:08:53 too much for one review id think
20:08:55 hub_cap: will there be any soft rules around pep8 or pyflakes validation?
20:09:15 arent those hard rules already?
20:09:19 hub_cap: should arguments for functions and such be doc'd for sphinx?
20:09:26 and are there any rules on that
20:09:27 the jenkins builds run pep8
20:09:32 pyflakes for sure has a lot of errors ignored
20:09:35 KennethWilke: Good point.
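As a rough illustration of the sphinx-style docstring convention being discussed (new methods get a docstring, with arguments documented for sphinx), here is a minimal made-up example; the function and parameter names are hypothetical and not taken from the Trove codebase:

    def volume_usage_ratio(used_gb, total_gb):
        """Return the fraction of a volume that is in use.

        :param used_gb: space currently used, in gigabytes.
        :param total_gb: total volume size, in gigabytes.
        :returns: a float between 0.0 and 1.0.
        :raises ValueError: if total_gb is zero or negative.
        """
        if total_gb <= 0:
            raise ValueError("total_gb must be positive")
        return float(used_gb) / total_gb

Sphinx's autodoc renders the :param:/:returns:/:raises: fields directly into the developer docs linked above.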
20:09:47 i think thats fair KennethWilke
20:09:55 kevinconway: you mean the ones ignored in tox.ini?
20:09:55 I vote we use sphinx-like docstrings.
20:10:00 hub_cap: kevinconway: if you use flake8 you get both
20:10:08 most other projects have switched
20:10:15 earthpiper: we would have to. The documentation is built using sphinx.
20:10:25 we are using flake8 for our [...:pep8] thing in tox.ini
20:10:29 clarkb: ^ ^
20:10:45 tox -epep8 just calls flake8
20:10:47 #link https://github.com/openstack/trove/blob/master/tox.ini
20:10:47 perfect
20:10:55 i only ask because pyflakes enforces their style guide which is a superset of PEP8
20:11:07 curious if we have any desire to use that or if we just need PEP8
20:11:13 are we ignoring _more_ than other projects?
20:11:41 id prefer we go w/ the other projects just cuz i dont want someone whos worked on another project come along and get failures cuz our flake setup is diff
20:11:58 if we are ignoring more, then we need to add some tasks to fix them in the codebase tho
20:12:07 hub_cap: Yes, that would be awfully flakey of us (no pun intended)
20:12:14 ;) grapex
20:12:31 i think its a fair point tho kevinconway and lets take it to the chat room to discuss examples in detail
20:12:34 sound good?
20:12:46 I think that sounds good.
20:12:47 cuz we might rule differently on some
20:12:48 that sounds good
20:12:50 word
20:12:56 #action DOC DOC DOC
20:13:00 ;)
20:13:23 #topic NRDB amendment to trove
20:13:31 just wanted to say that the TC ruled on the topic
20:13:33 and we are good!
20:13:37 weee
20:13:37 YEA!
20:13:37 It would also be nice if someone could compare the lists of errors that we're ignoring vs what a couple of other projects are ignoring as examples.
20:13:40 \o/
20:13:46 +1 SlickNik
20:13:48 Hell yea!
20:13:51 and someone blogged about it!
20:14:00 vipul: link it ;)
20:14:09 #link v
20:14:10 http://www.zerobanana.com/archive/2013/08/07
20:14:15 you already seen
20:14:28 also, i amended the mission to add the word 'provisioning' as per the TC's request
20:14:36 i know i just wanted others to see ;)
20:14:47 hub_cap: :)
20:14:48 sounds good
20:14:52 hub_cap: Nice.
20:15:06 ok now for the fun part!!
20:15:16 first lets discuss rpm integration
20:15:19 #topic rpm integration
20:15:27 ashestakov: care to comment on this?
20:15:55 hub_cap: i just finished the redhat class and tested it on fedora and centos
20:16:08 wonderful!!
20:16:18 can i commit it to this change https://review.openstack.org/#/c/36337/ ?
20:16:21 safe to assume we can expect a review somewhat soon?
20:16:41 hmm that might be fair since its mostly reviewed already
20:16:43 want to take it over
20:16:44 ?
20:16:55 anyone opposed to that? vipul SlickNik grapex ?
20:16:59 for ashestakov to take it over
20:17:08 also ashestakov would u like to introduce yourself? so people know you
20:17:22 hub_cap: I'm fine with it, although it may be easier for ashestakov if what's there got merged first.
20:17:28 I can take a look
20:17:29 Seemed pretty close.
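For context on the rpm-integration work being discussed, a hedged sketch of the general pattern of picking a distro-specific packaging implementation at runtime; the class names, the detection check, and the package-manager commands are illustrative assumptions, not the actual Trove guestagent code under review:

    import os
    import subprocess

    class DebianPkg(object):
        """Install packages with apt on Debian/Ubuntu guests."""
        def install(self, package):
            return subprocess.call(["apt-get", "-y", "install", package])

    class RedHatPkg(object):
        """Install packages with yum on Fedora/CentOS/RHEL guests."""
        def install(self, package):
            return subprocess.call(["yum", "-y", "install", package])

    def get_pkg_manager():
        # Crude distro check: RPM-based systems ship /etc/redhat-release.
        if os.path.exists("/etc/redhat-release"):
            return RedHatPkg()
        return DebianPkg()

The same question of "how do we detect which tool to use" comes up again below for enabling mysql on boot in mysql_service.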
20:17:45 oh i can retract my -1 if adding a test isn't worth it
20:17:49 it _is_ but the rhel impl is a complete waste
20:17:55 in that review as it is
20:18:00 its just pass lol
20:18:10 vipul: well its hard to do that w/o just faking the existence of the files
20:18:18 which then is just validating that the Base stuff works, which it does
20:18:27 whats funny is that py26 is run on a centos machine
20:18:34 but i finished only the pkg things, still have distro-specific things in mysql_service
20:18:37 so it was failing at first because it was grabbing the rhel manager lol
20:18:53 ok ashestakov lets merge that review then
20:18:59 and then you can create a new review
20:19:09 ill talk to vipul about merging it today so you can make progress
20:19:16 cool beans
20:19:35 ashestakov: we never got an intro :)
20:19:44 id LOVE to see rpm integration before h3 is cut. that will be awesome!
20:20:41 I'm for merging this piece in first so that we don't have a gargantuan review later.
20:20:47 +1
20:20:50 ok moving on?
20:21:02 hub_cap: still not clear there on trove/guestagent/manager/mysql_service.py, do i add an if/else to detect what tool to use to enable mysql on boot?
20:21:26 yes thats a valid question i dont think we answered. can we chat about it in #openstack-trove after the weekly meeting?
20:21:45 ok
20:21:51 and thank you for your work getting rpm stuff working ashestakov
20:21:59 NOW the fun part, more ashestakov talking!
20:22:04 #topic new blueprints
20:22:19 #link https://blueprints.launchpad.net/trove/+spec/guest-config-through-metadata
20:22:23 lets start with this
20:22:34 i feel like this is straightforward.
20:22:48 so i suggest pushing trove-guestagent.conf through metadata, like guest_info
20:22:53 the guest config can be written to the metadata server, and pulled down on install
20:22:58 file injection?
20:23:05 vipul: yes
20:23:08 so metadata != file injection
20:23:10 ok
20:23:25 so the taskmgr sends /both/ configs down
20:23:25 I like it, would like to know more about it
20:23:49 i think thats a fair point. we might even be able to template it
20:23:50 actually its a question of how to push it
20:24:12 do you mean whether we push it via the metadata service, or thru file_injection?
20:24:13 A good thing to think about also is what gives the guest its identity.
20:24:21 Now it looks in the config file which has the instance ID
20:24:39 thats already pushed thru file injection grapex
20:24:42 in the past it would make a call to hostname, and then talk back to the central database on startup to determine what its ID was... which was pretty goofy. :p
20:24:55 yes it was lol ;)
20:24:58 but it worked!!
20:25:04 Sorry, you said metadata != file injection and I read it as we'd be replacing file injection.
20:25:07 N/m
20:25:16 is that what you are asking ashestakov?
20:25:18 is this similar to the config drive stuff?
20:25:27 whether to use the metadata server vs file injection?
20:26:19 hub_cap: i think file injection will be better, but is there a need for a feature to update the config on the fly and restart the guestagent?
20:26:36 im not sure there is a need for that now ashestakov
20:26:46 if there is in the future, i think the taskmgr is a good place to do that
20:26:55 and the taskmgr will be doing the file injection by default
20:26:59 on create instance
20:27:10 so it wouldnt be hard to do that
20:27:15 yep
20:27:32 file injection not supported in all hypervisors though right?
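A sketch of what injecting the guest config at boot could look like from the taskmanager side, using python-novaclient's personality-file support (the files= argument to servers.create); the credentials, paths, and image/flavor names are placeholder assumptions, not the blueprint's final design:

    from novaclient.v1_1 import client

    # Placeholder credentials -- illustrative only.
    nova = client.Client("trove", "secret", "service", "http://keystone:5000/v2.0")

    instance_id = "11111111-2222-3333-4444-555555555555"
    guest_conf = "[DEFAULT]\nrabbit_host = 10.0.0.1\n"

    image = nova.images.find(name="fedora-mysql")
    flavor = nova.flavors.find(name="m1.small")

    # Inject both config files into the guest filesystem at boot,
    # the same way guest_info is injected today.
    server = nova.servers.create(
        name="trove-" + instance_id,
        image=image,
        flavor=flavor,
        files={
            "/etc/trove/trove-guestagent.conf": guest_conf,
            "/etc/guest_info": "guest_id = %s\n" % instance_id,
        },
    )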
20:27:33 ok so i think that pushing the config file via file injection is a good idea, just like we do with guest_info today
20:27:38 is that something we need to worry about?
20:27:38 is it not?
20:27:45 well vipul
20:27:49 if its not
20:27:53 then your shiz wont work anyway
20:27:57 cuz we inject guest_info
20:28:00 lol true :)
20:28:04 and like grapex said _thats_ the uuid
20:28:04 hub_cap: What's the difference between the "config file" and the "guest_info?"
20:28:13 grapex: if we inject them both, nothing
20:28:15 Oh- the two config files
20:28:17 ok
20:28:22 we can wrap them into one file if we want...
20:28:36 hub_cap: i was just going to say why do we have two
20:28:45 err continue to have two
20:28:48 we had two cuz 1 was static
20:28:54 and the guest_info was dynamic and injected
20:29:02 but since they are all going to be injected we can cut it down to one file
20:29:04 konetzed: That way we can build images for each environment with the static config in place
20:29:12 hub_cap: something we can revisit later
20:29:13 +1 to moving them into the same file.
20:29:18 grapex: either way
20:29:24 hub_cap: Or we could have the guest grab additional info by asking Trove for it
20:29:27 Although
20:29:27 maybe we can insert guest_info into the config on instance create, and use only one file?
20:29:28 your taskmgr will have that config file for that config environment
20:29:36 lol, how could it ask Trove without already having it?
20:29:39 ashestakov: i think thats what we are suggesting
20:29:53 lol grapex
20:30:00 im ok with 1 config file
20:30:03 but lets leave 2 for now
20:30:09 +1
20:30:09 and make them discrete reviews / blueprints
20:30:22 lets first inject the main config and revisit it
20:30:26 sound good?
20:30:30 yeah, doesn't have to be part of the same bp
20:30:35 exactly SlickNik
20:30:47 ok ill add to this blueprint what weve discussed
20:30:50 after the meeting
20:30:52 and approve it
20:30:54 NEXT
20:30:59 is this for h3?
20:31:00 oh oh i want that to be configurable :)
20:31:02 #link https://blueprints.launchpad.net/trove/+spec/guestagent-through-userdata
20:31:10 not just send it by default
20:31:14 SlickNik: maybe but maybe not
20:31:21 to CYA in prod
20:31:27 ok vipul thats faire
20:31:28 *fair
20:31:51 ill add that as well
20:32:02 sounds good.
20:32:05 ashestakov: go ahead with the guestagent-userdata
20:32:45 so guestagent-userdata, i suggest simply adding the ability to push a cloudinit script to the instance
20:33:04 this script can prepare the instance and set up the package with the agent
20:33:14 +1 to this
20:33:27 basically move the bootstrap junk we do today to userdata
20:33:28 interesting use case
20:33:39 but should make things way more flexible
20:33:47 does this require bringing back the apt repo into devstack?
20:33:55 i dont think so
20:34:01 we will not do that ;)
20:34:10 i think the script may differ, depending on service type
20:34:20 agreed
20:34:27 with this is there any reason that repos couldnt be handed down to a guest?
20:34:53 repo or package?
20:34:57 repo
20:35:16 maybe we can also deploy the guestagent config using this script too? (from previous blueprint)
20:35:17 cloud-init is easy to make configurable
20:35:20 really a repo is just a conf file
20:35:41 dukhlov: thats not a bad idea. the guest_info needs to be injected because its created on the fly for each instance
20:35:44 maybe we are getting lost in what this could all be used for
20:35:54 but the config file that has the static stuff could be done this way too
20:36:04 yes konetzed we are. i think its a good idea
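A minimal sketch of the guestagent-through-userdata idea: hand nova a deployment-provided cloud-init script that installs and starts the agent on first boot. The package name, the script contents, and the image/flavor names are assumptions for illustration, not the blueprint's final design:

    from novaclient.v1_1 import client

    nova = client.Client("trove", "secret", "service", "http://keystone:5000/v2.0")

    # Built by the deployer; cloud-init executes it on the instance's first boot.
    userdata = "\n".join([
        "#!/bin/bash",
        "# install the guest agent and its dependencies on a vanilla image",
        "yum -y install trove-guestagent || apt-get -y install trove-guestagent",
        "service trove-guestagent start",
    ])

    server = nova.servers.create(
        name="trove-instance",
        image=nova.images.find(name="fedora-vanilla"),
        flavor=nova.flavors.find(name="m1.small"),
        userdata=userdata,   # cloud-init picks this up inside the guest
    )

Because the script is just data handed to nova at boot, a dev environment can keep baking a mysql image in redstack while a production deployment swaps in its own script for vanilla images.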
20:36:10 my understanding so far is we do a firstboot.d and rsync the guest agent.. we want to change that to be user_data on boot
20:36:21 what all do we want to do inside the user_data script? Just install the guest agent or install all the dependencies (like, mysql)?
20:36:43 saurabhs: well for the dev env i think install the dependency too
20:37:00 but we need to make sure the user data script is configurable so a production env can use vanilla images
20:37:07 Yea as long as the contents of that script can be driven by deployment then it should be ok
20:37:12 correct vipul
20:37:27 i think that ashestakov is thinking of a vanilla image, correct?
20:37:36 hub_cap: correct
20:37:38 and we can install each service on it in the user data script
20:37:45 so that its ready to run on create
20:37:51 or ready to "configure" on create
20:37:56 if we put too much inside user_data it increases the instance boot up/init time
20:37:57 where the guest does the configuration
20:38:10 saurabhs: yes it does, and for development we dont want this
20:38:15 but for deployment its feasible
20:38:18 i mean, by this script we can configure selinux, iptables, repos, setup tools, setup the agent, setup anything
20:38:26 saurabhs: that's why we'll make it configurable at deploy time.
20:38:27 exactly
20:38:46 we just need to make sure we're still doing a mysql image in redstack
20:38:51 so our int-tests are sane
20:39:00 +1 to that vipul
20:39:14 Otherwise it might take longer.
20:39:24 vipul: correct
20:39:35 and then ashestakov can use vanilla images w/ special networking stuff in his deployment if needed
20:39:43 sounds good
20:40:01 sound good ashestakov ?
20:40:11 hub_cap: yes
20:40:21 perfect ill add the summary after the meeting
20:40:23 sounds good to me as well.
20:40:25 moving on
20:40:35 KennethWilke: im saving yours
20:40:37 #link https://blueprints.launchpad.net/trove/+spec/ssh-key-option
20:40:43 i like this one as well for the record
20:40:45 also, as far as I know HEAT also works this way, so HEAT integration will be easier for us in the future
20:41:26 how are we proposing we add the key in?
20:41:31 dukhlov: all of this will help with heat integration
20:41:35 hub_cap: couldnt this be done by the last blueprint?
20:41:37 and im doing that starting this week
20:41:45 so i might be adding all this in by default ;)
20:41:49 konetzed: not exactly
20:41:57 nova boot has a special kwarg for this
20:42:04 SlickNik: for the purpose of maintenance
20:42:04 ah
20:42:06 _not_ for a customer
20:42:10 NOT NOT NOT for a customer ;)
20:42:25 for the system maintainers to log in to instances that arent containers (lol)
20:42:36 thats how i understand it
20:42:38 correct ashestakov ?
20:42:43 hub_cap: Sounds like a great idea.
20:42:44 basically adding a --key-name arg?
20:42:47 hub_cap: exactly
20:42:54 yes vipul
20:43:03 Seems like it would be best to add an RPC call to add this on demand as well.
20:43:09 but, will there be only one key for all instances?
20:43:11 Make it part of the MGMT api
20:43:19 So, I take it this would be configurable somewhere as well?
20:43:22 ashestakov: i assumed so
20:43:28 keys need to be unique to instances
20:43:33 ashestakov: If it was a MGMT api call you could pass in a password, or one would be generated and get passed back to you.
20:43:33 Will we do key mgmt for nova?
20:43:46 adrian_otto: this is for maintenance w/o a keyserver so im not sure thatll be the case
20:43:49 I mean, what if the key name isn't already in nova?
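A sketch of the --key-name idea under discussion: create (or reuse) a nova keypair in the instance's tenant and pass it at boot so operators can SSH in for maintenance. Whether keys are per instance or per tenant is exactly the open question above, so the naming here is a placeholder, not a decision:

    from novaclient.v1_1 import client

    nova = client.Client("trove", "secret", "service", "http://keystone:5000/v2.0")
    instance_id = "11111111-2222-3333-4444-555555555555"

    # One keypair per instance avoids sharing credentials across instances;
    # nova generates the key and returns the private half once.
    keypair = nova.keypairs.create(name="trove-maint-%s" % instance_id)
    private_key = keypair.private_key   # hand this to the operator's key store

    server = nova.servers.create(
        name="trove-" + instance_id,
        image=nova.images.find(name="fedora-mysql"),
        flavor=nova.flavors.find(name="m1.small"),
        key_name=keypair.name,   # nova injects the public key for SSH access
    )

Note the constraint raised below: because trove boots instances in per-customer tenants, a shared key name would have to exist in every tenant, which is part of why this needs more thought.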
20:44:01 the customer wont be able to create or use the key
20:44:14 even for maintenance, the best practice is not to share credentials within a grouping of resources.
20:44:14 trove currently boots an instance in a non-shared tenant
20:44:20 so that keyname has to exist in each tenant
20:44:45 you can use a keyserver if you want a single credential to yield more credentials.
20:45:13 so based on what adrian_otto and vipul say, we should probably investigate this more
20:45:38 ashestakov: lets focus this blueprint based on security concerns and talk about it next week
20:45:41 Yes, this seems problematic as is.
20:46:06 actually, we can push the key through cloudinit :)
20:46:11 ashestakov: can you answer the questions (by next week) on 1) different keys for each instance, 2) keys belonging to each tenant
20:46:16 there you go ;)
20:46:23 ashestakov: given you can do that, its up to you to decide how secure to make it eh?
20:46:26 and we can keep it out of trove
20:46:35 ashestakov: that might be a better option to consider.
20:46:44 as an operator you can decide to put the same key on all instances, but the system should support the ability to put a unique key on each instance to allow for the best practice to be applied.
20:46:46 Let's think about this one and discuss it some more.
20:47:00 adrian_otto: i agree w that
20:47:01 adrian_otto: +1
20:47:08 def adrian_otto
20:47:14 lol
20:47:14 so lets leave this as not approved and we can discuss more ashestakov
20:47:27 i think he meant def adrian_otto(self):
20:47:32 i think so too
20:47:51 just wanted to quickly say
20:47:53 #link https://blueprints.launchpad.net/horizon/+spec/trove-support
20:48:00 there are about 100 ways of doing this w/o putting a key on
20:48:01 there is now a blueprint to add trove support to horizon
20:48:07 heh, just re-read my last comment
20:48:12 agreed konetzed
20:48:18 i type slow :(
20:48:33 hub_cap: Nice!
20:48:38 Awesome!
20:48:46 maybe we can have the work robertmyers did (and AGiardini updated) pushed up for review
20:48:47 so we're able to do this now i take it
20:48:52 yes
20:48:57 well they said there was no hard and fast rule
20:48:58 sweet robertmyers!
20:49:01 That would be awesome.
20:49:03 yea let's do it.. i know some HP'ers have had some issues getting it running
20:49:06 and that the other projects did it /after/ integration
20:49:09 this will help keep it running :)
20:49:12 but that we could do it earlier
20:49:19 im all for it
20:49:36 ok so last BP
20:49:36 I need to dig in to the code
20:49:39 (this is fun!)
20:49:45 I left it a while ago
20:49:47 robertmyers: plz chat with AGiardini
20:49:51 hes been updating it
20:49:53 this is KennethWilke, I take it
20:49:56 dont want to nullify what hes done
20:49:59 yes it is KennethWilke
20:50:02 #link https://blueprints.launchpad.net/trove/+spec/taskmanager-statusupdate
20:50:25 +100
20:50:33 so want me to summarize KennethWilke?
20:50:38 ill go for it
20:50:42 kk
20:50:53 generally speaking i don't like the idea of the guest agent communicating directly with mysql or whatever the db is that the trove database lives on
20:51:14 #agreed
20:51:17 I don't like that either.
20:51:22 i think no one does
20:51:23 i dont think anyone likes it
20:51:28 based on my current understanding the best place for this to take place would be the taskmanager, but others have brought forth alternatives to this as well
20:51:29 except amytron
20:51:38 trove-conductor !!
20:51:42 I think it's one of those necessary-evil artifacts from the past.
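A very rough sketch of the trove-conductor idea that comes up next: the guest agent stops talking to the Trove database directly and instead sends its heartbeat over RPC to a conductor-style service, which is the only component holding database credentials. The class names and the rpc_cast helper are hypothetical stand-ins, not actual Trove or oslo APIs:

    # Guest side: report status over the message bus instead of via SQL.
    class GuestHeartbeat(object):
        def __init__(self, rpc_cast, instance_id):
            self._cast = rpc_cast          # e.g. an AMQP fire-and-forget cast()
            self._instance_id = instance_id

        def report(self, status):
            # No DB credentials ever live on the guest.
            self._cast("conductor", "heartbeat",
                       instance_id=self._instance_id, status=status)

    # Conductor side: the only component that touches the Trove database.
    class ConductorManager(object):
        def __init__(self, db_api):
            self._db = db_api

        def heartbeat(self, instance_id, status):
            self._db.update_service_status(instance_id, status)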
20:51:59 hub_cap: not a bad idea
20:52:01 konetzed: and i talked about creating a new manager to do this
20:52:06 hub_cap: me?!
20:52:09 since nova has a nova-conductor
20:52:11 yep, please don't add it to taskmanager
20:52:18 lol, trove-conductor was exactly what I was thinking.
20:52:19 that proxies the db work the computes do
20:52:25 also, i actually like what nova has done
20:52:26 #link https://github.com/openstack/nova/tree/master/nova/servicegroup
20:52:33 we would have a trove-conductor to proxy the stuff that the guest does
20:52:40 this is the compute status API..
20:52:51 which is what everything goes through to get the status of a compute node
20:52:54 so we could do the same
20:53:13 so if we don't want to put these health check updates in the DB, we don't have to
20:53:27 vipul: Great idea.
20:53:30 hmm
20:53:35 thats a good idea i think
20:53:43 i figured we would get there
20:53:54 +1 to zookeeper ;)
20:53:58 i understand if the community would like to go a route similar to nova-conductor, but if we go in that direction i am not confident i have the requisite understanding to take care of this in a timely manner
20:54:01 hub_cap: Memories. :)
20:54:13 right grapex??????????
20:54:18 hub_cap: i think db first and pick a better store second
20:54:28 inside joke ?
20:54:29 lol
20:54:29 timely schmimely KennethWilke
20:54:41 vipul: a long while ago we started a java+zk POC for trove
20:54:42 vipul: Our pre-OpenStack stuff used ZooKeeper.
20:54:45 well, the db implementation is the first one that's absolutely needed.
20:54:47 before we went the openstack route
20:54:55 yes SlickNik i agree w/ that
20:54:55 i'm not saying we need to support multiple stores, but support the driver concept
20:54:56 other impl's can come later.
20:55:03 SlickNik: yep
20:55:05 amytron: BOOYA
20:55:10 i think the db is the worst place to store this info
20:55:22 mainly cuz who cares about it historically
20:55:23 konetzed: long term so do i
20:55:33 so lets not get into it too too much
20:55:35 but like i said first db then something else
20:55:37 lets approve the BP
20:55:38 yep, abstract where it's stored.
20:55:42 +1
20:55:48 its /stored/
20:55:52 I like that it abstracts out how we grab the heartbeat. I can help KennethWilke- I really like this servicegroup drivers thing Vipul pointed out.
20:56:03 i think we may need a different BP if we're going the conductor route
20:56:06 yes as do i grapex
20:56:11 KennethWilke: we can mod the blueprint
20:56:30 to keep the history of it
20:56:36 ok lets move to open discussion
20:56:38 alrighty
20:56:41 ashestakov: had another question
20:56:46 #topic open discussion
20:56:51 go ahead ashestakov
20:57:00 hub_cap: yep, how to separate the guestagent from trove?
20:57:05 Ahhh yes
20:57:09 we want to do that
20:57:11 i mean for packaging and deployment
20:57:17 to make a separate project, right?
20:57:24 trove-guest/ or whatever
20:57:31 maybe, or a separate setup.py
20:57:32 ashestakov: i think that all of core wants that
20:57:38 different codebase, same project?
20:57:52 either or adrian_otto..
20:57:55 an OpenStack project can have multiple named codebases, for this very purpose
20:58:01 i think we'd want a trove-agent repo or something so it works well with the CI tools
20:58:07 because they may be distributed separately
20:58:10 well mordred says its not a good idea to have different codebases
20:58:14 due to the setup.py stuff
20:58:18 what?
20:58:24 * mordred reads
20:58:25 2 projects 1 codebase
20:58:26 no
20:58:28 mordred: ^ ^
20:59:01 hub_cap: nope. it's TOTALLY possible to have two different repos run by the same project
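Along the lines of nova's servicegroup API linked above, a hedged sketch of what abstracting the heartbeat store behind a driver might look like: a DB-backed driver first, with something like ZooKeeper pluggable later. All of the naming here is hypothetical, not existing Trove code:

    import abc
    import time

    class ServiceGroupDriver(object):
        """Interface for recording and reading guest heartbeats."""
        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def report_heartbeat(self, instance_id):
            pass

        @abc.abstractmethod
        def is_up(self, instance_id, timeout=60):
            pass

    class DbDriver(ServiceGroupDriver):
        """First implementation: keep heartbeats in the Trove database."""
        def __init__(self, db_api):
            self._db = db_api

        def report_heartbeat(self, instance_id):
            self._db.save_heartbeat(instance_id, time.time())

        def is_up(self, instance_id, timeout=60):
            last = self._db.get_heartbeat(instance_id)
            return last is not None and (time.time() - last) < timeout

    # A ZooKeeper-backed driver could implement the same interface later,
    # without changing the callers that only ask "is this guest up?"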
20:59:07 yes!
20:59:11 different repos
20:59:17 what is not allowed is subdirs in the same repo each with their own setup.py
20:59:20 as in 2 github repos correct mordred?
20:59:31 yep that's what i thought
20:59:42 yes not /root/{trove,trove-guest}/setup.py
20:59:57 is that what you meant adrian_otto?
21:00:02 we meant to create a new repo
21:00:13 if your intent is to distribute them separately, then yes, two separate repos (still within the Trove project), each bundled separately for distribution.
21:00:24 correct
21:00:29 cool
21:00:34 * mordred injects giant head into the conversation ...
21:00:41 * hub_cap runs
21:00:42 Yup. All this is possible when we separate them into two repos.
21:00:46 * imsplitbit runs too
21:00:58 heat also has in-instance stuff ... you guys have at least looked at the overlap, yeah?
21:01:01 * hub_cap is crushed by mordreds giant head as he wields it around
21:01:02 time is up
21:01:13 yes lets chat in #openstack-trove
21:01:15 #endmeeting