20:00:32 <hub_cap> #startmeeting trove
20:00:37 <openstack> Meeting started Wed Aug  7 20:00:32 2013 UTC and is due to finish in 60 minutes.  The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:40 <openstack> The meeting name has been set to 'trove'
20:00:44 <KennethWilke> hello!
20:00:51 <hub_cap> #link https://wiki.openstack.org/wiki/Meetings/TroveMeeting
20:00:56 <hub_cap> moving my desk inside brb
20:01:03 <ashestakov> hi guys
20:01:11 <ashestakov> #help
20:01:36 <hub_cap> hi ashestakov
20:01:43 <hub_cap> welcome
20:02:03 <hub_cap> got a packed meeting today
20:02:08 <hub_cap> lets get started w/ last wks action items
20:02:23 <vipul> o/
20:02:27 <hub_cap> #link http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-07-31-20.00.html
20:02:37 <hub_cap> ^ ^ not updated on the meeting page, so use this link
20:02:55 <hub_cap> only one AI, vipul get Nik
20:03:01 <hub_cap> #topic action items
20:03:16 <vipul> SlickNik
20:03:16 <hub_cap> oh i know this one
20:03:20 <hub_cap> exactly
20:03:28 <hub_cap> so, ive added the core team to the -ptl group
20:03:30 <vipul> i holla'd at him
20:03:34 <hub_cap> so we can all upload to pypi
20:03:44 <hub_cap> and i uploaded a new tag to pypi
20:03:53 <hub_cap> as per mordreds request
20:04:11 <SlickNik> sorry, running a bit late.
20:04:12 <hub_cap> so SlickNik grapex vipul you can all tag
20:04:28 <SlickNik> thanks for explaining the tagging situation hub_cap
20:04:32 <adrian_otto> hi
20:04:34 <hub_cap> np!
20:04:37 <hub_cap> hi adrian_otto
20:04:38 <hub_cap> #topic clustering api update
20:04:40 <imsplitbit> hello
20:04:43 <hub_cap> imsplitbit: anything to report here?
20:04:44 <imsplitbit> yay!
20:04:47 <imsplitbit> well sure
20:04:52 <hub_cap> GO GO GO
20:04:52 <imsplitbit> I got hung up on testing
20:04:59 <cp16net> hi
20:05:01 <imsplitbit> but got something worked out and have unittests done
20:05:09 <hub_cap> awesome
20:05:14 <imsplitbit> I started working on adding clustertypes to troveclient
20:05:21 <imsplitbit> this is all just for clustertypes btw
20:05:21 <SlickNik> #link https://wiki.openstack.org/wiki/GerritJenkinsGithub#Tagging_a_Release
20:05:22 <imsplitbit> sorry
20:05:30 <imsplitbit> but yeah I'm close to having that done
20:05:33 <hub_cap> cool. good first steps
20:05:37 <imsplitbit> should be today or tomorrow
20:05:41 <vipul> cool looking forward to it imsplitbit
20:05:54 <hub_cap> great work. maybe consider pushing a review for us to look at?
20:06:00 <hub_cap> once the clustertypes is done
20:06:07 <imsplitbit> as soon as I have client
20:06:10 <imsplitbit> I will push both
20:06:15 <hub_cap> yes perfect
20:06:17 <imsplitbit> that way they can be reviewed/tested
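For readers following along: a rough idea of what a clustertypes addition to python-troveclient might look like, assuming the usual novaclient-style Resource/Manager pattern the client already follows. The class names and the /clustertypes endpoint below are illustrative guesses, not imsplitbit's actual patch.

    # Hypothetical sketch only; follows python-troveclient's Manager/Resource
    # conventions, but the names and endpoint here are assumptions.
    from troveclient import base


    class ClusterType(base.Resource):
        """A cluster type describes a supported cluster topology."""

        def __repr__(self):
            return "<ClusterType: %s>" % self.name


    class ClusterTypes(base.ManagerWithFind):
        """Manage ClusterType resources."""

        resource_class = ClusterType

        def list(self):
            """List the cluster types the deployment supports."""
            return self._list("/clustertypes", "clustertypes")

        def get(self, clustertype_id):
            """Fetch a single cluster type by id."""
            return self._get("/clustertypes/%s" % clustertype_id, "clustertype")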
20:06:25 <hub_cap> okey. moving on if no questions
20:06:47 <hub_cap> #topic docstring rules
20:06:54 <hub_cap> so SlickNik did some great work to get developer docs
20:06:58 <hub_cap> SlickNik: can u link em
20:07:11 <hub_cap> but he mentioned that we arent adding our module docs, cuz frankly
20:07:14 <hub_cap> they suck
20:07:25 <SlickNik> #link http://docs.openstack.org/developer/trove/
20:07:29 <hub_cap> so we brainstormed on adding a soft rule to our reviews
20:07:49 <hub_cap> 1) new methods must have a docstring (unless they are like, 1 liner @properties, use good judgement)
20:08:02 <hub_cap> 2) if you mod an existing method, add a docstring to it, but dont go doc'ing up the whole module
20:08:18 <hub_cap> does that sound fair to everyone?
20:08:22 <grapex> hub_cap: Should doc'ing up the whole module be in a separate PR?
20:08:23 <KennethWilke> i am in favor of this
20:08:29 <hub_cap> grapex: id think so
20:08:31 <grapex> hub_cap: Sounds good.
20:08:40 <hub_cap> if you _want_ to go doc up a module, do it, for sure
20:08:41 <vipul> works for me
20:08:46 <hub_cap> but not _in_ another review
20:08:53 <hub_cap> too much for one review id think
20:08:55 <kevinconway> hub_cap: will there be any soft rules around pep8 or pyflakes validation?
20:09:15 <hub_cap> arent those hard rules already?
20:09:19 <KennethWilke> hub_cap: should arguments for functions and such be doc'd for sphinx?
20:09:26 <KennethWilke> and are there any rules on that
20:09:27 <hub_cap> the jenkins builds run pep8
20:09:32 <kevinconway> pyflakes for sure has a lot of errors ignored
20:09:35 <grapex> KennethWilke: Good point.
20:09:47 <hub_cap> i think thats fair KennethWilke
20:09:55 <hub_cap> kevinconway: you mean the ones ignored in tox.ini?
20:09:55 <earthpiper> I vote we use sphinx like doc strings.
20:10:00 <clarkb> hub_cap: kevinconway: if you use flake8 you get both
20:10:08 <clarkb> most other projects have switched
20:10:15 <SlickNik> earthpiper: we would have to. The documentation is built using sphinx.
20:10:25 <hub_cap> we are using flake8 for our [...:pep8] thing in tox.ini
20:10:29 <hub_cap> clarkb:  ^ ^
20:10:45 <datsun180b> tox -epep8 just calls flake8
20:10:47 <hub_cap> #link https://github.com/openstack/trove/blob/master/tox.ini
20:10:47 <clarkb> perfect
20:10:55 <kevinconway> i only ask because pyflakes enforces their style guide which is a superset of PEP8
20:11:07 <kevinconway> curious if we have any desire to use that or if we just need PEP8
20:11:13 <hub_cap> are we ignoring _more_ than other projects?
20:11:41 <hub_cap> id prefer we go w/ the other projects just cuz i dont want someone whos worked on another project come along and get failures cuz our flake setup is diff
20:11:58 <hub_cap> if we are ignoring more, then we need to add some tasks to fix them in the codebase tho
20:12:07 <grapex> hub_cap: Yes, that would be awfully flakey of us (no pun intended)
20:12:14 <hub_cap> ;) grapex
20:12:31 <hub_cap> i think its a fair point tho kevinconway and lets take it to the chat room to discuss examples in detail
20:12:34 <hub_cap> sound good?
20:12:46 <SlickNik> I think that sounds good.
20:12:47 <hub_cap> cuz we might rule differently on some
20:12:48 <kevinconway> that sounds good
20:12:50 <hub_cap> word
20:12:56 <hub_cap> #action DOC DOC DOC
20:13:00 <hub_cap> ;)
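As a concrete illustration of the soft rule and the sphinx-style argument docs KennethWilke asked about, a new or modified method would carry something like the following; the method and its parameters are made up for the example:

    def resize_volume(self, context, instance_id, new_size):
        """Resize the volume attached to a given instance.

        :param context: the request context for this operation
        :param instance_id: UUID of the instance whose volume is being resized
        :param new_size: the new volume size in GB
        :returns: None; progress is reported asynchronously by the guest
        """
        raise NotImplementedError()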
20:13:23 <hub_cap> #topic NRDB amendment to trove
20:13:31 <hub_cap> just wanted to say that the TC ruled on the topic
20:13:33 <hub_cap> and we are good!
20:13:37 <KennethWilke> weee
20:13:37 <konetzed> YEA!
20:13:37 <SlickNik> It would also be nice if someone could compare the lists of errors that we're ignoring vs what a couple of other projects are ignoring as examples.
20:13:40 <hub_cap> \o/
20:13:46 <hub_cap> +1 SlickNik
20:13:48 <SlickNik> Hell yea!
20:13:51 <vipul> and someone blogged about it!
20:14:00 <hub_cap> vipul: link it ;)
20:14:09 <vipul> #link http://www.zerobanana.com/archive/2013/08/07
20:14:15 <vipul> you already seen
20:14:28 <hub_cap> also, i amended the mission to add the word 'provisioning' as per the TC's request
20:14:36 <hub_cap> i know i just wanted others to see ;)
20:14:47 <imsplitbit> hub_cap: :)
20:14:48 <KennethWilke> sounds good
20:14:52 <grapex> hub_cap: Nice.
20:15:06 <hub_cap> ok now for the fun part!!
20:15:16 <hub_cap> first lets discuss rpm integration
20:15:19 <hub_cap> #topic rpm integration
20:15:27 <hub_cap> ashestakov: care to comment on this?
20:15:55 <ashestakov> hub_cap: i just finished redhat class and tested it on fedora and centos
20:16:08 <hub_cap> wonderful!!
20:16:18 <ashestakov> can i commit it to this change https://review.openstack.org/#/c/36337/ ?
20:16:21 <hub_cap> safe to assume we can expect a review somewhat soon?
20:16:41 <hub_cap> hmm that might be fair since its mostly reviewed already
20:16:43 <hub_cap> want to take it over
20:16:44 <hub_cap> ?
20:16:55 <hub_cap> anyone opposed to that? vipul SlickNik grapex ?
20:16:59 <hub_cap> for ashestakov to take it over
20:17:08 <hub_cap> also ashestakov would u like to introduce yourself? so people know you
20:17:22 <grapex> hub_cap: I'm fine with it, although it may be easier for ashestakov if what's there got merged first.
20:17:28 <vipul> I can take a look
20:17:29 <grapex> Seemed pretty close.
20:17:45 <vipul> oh i can retract my -1 if adding a test isn't worth it
20:17:49 <hub_cap> it _is_ but the rhel impl is a complete waste
20:17:55 <hub_cap> in that review as it is
20:18:00 <hub_cap> its just pass lol
20:18:10 <hub_cap> vipul: well its hard to do that w/o just faking the existence of the files
20:18:18 <hub_cap> which then is just validating that the Base stuff works, which it does
20:18:27 <hub_cap> whats funny is that py26 is run on a centos machine
20:18:34 <ashestakov> but i finished only the pkg things, still have distribution-specific things in mysql_service
20:18:37 <hub_cap> so it was failing at first because it was grabbing the rhel manager lol
20:18:53 <hub_cap> ok ashestakov lets merge that review then
20:18:59 <hub_cap> and then you can create a new review
20:19:09 <hub_cap> ill talk to vipul about merging it today so you can make progress
20:19:16 <vipul> cool beans
20:19:35 <vipul> ashestakov: we never got an intro :)
20:19:44 <hub_cap> id LOVE to see rpm integration before h3 is cut. that will be awesome!
20:20:41 <SlickNik> I'm for merging this piece in first so that we don't have a gargantuan review later.
20:20:47 <hub_cap> +1
20:20:50 <hub_cap> ok moving on?
20:21:02 <ashestakov> hub_cap: still not clear on trove/guestagent/manager/mysql_service.py: should i add an if/else to detect which tool to use to enable mysql on boot?
20:21:26 <hub_cap> yes thats a valid question i dont think we answered. can we chat about it in #openstack-trove after the weekly meeting?
20:21:45 <ashestakov> ok
20:21:51 <hub_cap> and thank you for your work getting rpm stuff working ashestakov
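On ashestakov's open question about mysql_service.py, one possible shape for the distro branch deferred to #openstack-trove is sketched below. It assumes the existing utils.execute_with_timeout helper, and in practice it would more likely live behind per-distro manager classes than a literal if/else:

    import os

    from trove.common import utils


    def enable_mysql_on_boot():
        """Enable MySQL at boot using whichever tool the distro provides.

        Sketch only: detection via /etc/redhat-release and the choice of
        systemctl vs chkconfig vs update-rc.d are illustrative, not the
        code in the review under discussion.
        """
        if os.path.isfile("/etc/redhat-release"):
            if os.path.exists("/usr/bin/systemctl"):
                # Fedora (and newer RHEL) use systemd unit names.
                utils.execute_with_timeout("sudo", "systemctl", "enable", "mysqld")
            else:
                # Older RHEL/CentOS still use SysV init + chkconfig.
                utils.execute_with_timeout("sudo", "chkconfig", "mysqld", "on")
        else:
            # Debian/Ubuntu path, matching what the guestagent does today.
            utils.execute_with_timeout("sudo", "update-rc.d", "mysql", "enable")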
20:21:59 <hub_cap> NOW The fun part, more ashestakov talking!
20:22:04 <hub_cap> #topic new blueprints
20:22:19 <hub_cap> #link https://blueprints.launchpad.net/trove/+spec/guest-config-through-metadata
20:22:23 <hub_cap> lets start with this
20:22:34 <hub_cap> i feel like this is straightforward.
20:22:48 <ashestakov> so i suggest pushing trove-guestagent.conf through metadata, like guest_info
20:22:53 <hub_cap> the guest config can be written to metadata server, and pulled down on install
20:22:58 <vipul> file injection?
20:23:05 <ashestakov> vipul: yes
20:23:08 <hub_cap> so metadata != file injection
20:23:10 <hub_cap> ok
20:23:25 <hub_cap> so the taskmgr sends /both/ configs down
20:23:25 <konetzed> I like it, would like to know more about it
20:23:49 <hub_cap> i think thats a fair point. we might even be able to template it
20:23:50 <ashestakov> actually its question to discuss how to push it
20:24:12 <hub_cap> do you mean whether we push it via the metadata service, or thru file_injection?
20:24:13 <grapex> A good thing to think about also is what gives the guest its identity.
20:24:21 <grapex> Now it looks in the config file which has the instance ID
20:24:39 <hub_cap> thats already pushed thru file injection grapex
20:24:42 <grapex> in the past it would make a call to hostname, and then talk back to the central database on startup to determine what its ID was... which was pretty goofy. :p
20:24:55 <hub_cap> yes it was lol ;)
20:24:58 <hub_cap> but it worked!!
20:25:04 <grapex> Sorry, you said metadata != file injection and I read it as we'd be replacing file injection.
20:25:07 <grapex> N/m
20:25:16 <hub_cap> is that what you are asking ashestakov?
20:25:18 <vipul> is this similar to the config drive stuff?
20:25:27 <hub_cap> whether to use metadata server vs file injection?
20:26:19 <ashestakov> hub_cap: i think file injection will be better, but do we need a feature to update the config on the fly and restart the guestagent?
20:26:36 <hub_cap> im not sure there is a need for that now ashestakov
20:26:46 <hub_cap> if there is in the future, i think the taskmgr is a good place to do that
20:26:55 <hub_cap> and the taskmgr will be doing the file injection by default
20:26:59 <hub_cap> on create instance
20:27:10 <hub_cap> so it wouldnt be hard to do that
20:27:15 <ashestakov> yep
20:27:32 <vipul> file injection not supported in all hypervisors though right?
20:27:33 <hub_cap> ok so i think that pushing the config file via file injection is a good idea, just like we do with guest_info today
20:27:38 <vipul> is that something we need to worry about?
20:27:38 <hub_cap> is it not?
20:27:45 <hub_cap> well vipul
20:27:49 <hub_cap> if its not
20:27:53 <hub_cap> then your shiz wont work anyway
20:27:57 <hub_cap> cuz we inject guest_info
20:28:00 <vipul> lol true :)
20:28:04 <hub_cap> and like grapex said _thats_ the uuid
20:28:04 <grapex> hub_cap: What's the difference between the "config file" and the "guest_info?"
20:28:13 <hub_cap> grapex: if we inject them both, nothing
20:28:15 <grapex> Oh- the two config files
20:28:17 <grapex> ok
20:28:22 <hub_cap> we can wrap them into one file if we want...
20:28:36 <konetzed> hub_cap: i was just going to say why do we have two
20:28:45 <konetzed> err continue to have two
20:28:48 <hub_cap> we had two cuz 1 was static
20:28:54 <hub_cap> and the guest_info was dynamic and injected
20:29:02 <hub_cap> but since they are all going to be injected we can cut it down to one file
20:29:04 <grapex> konetzed: That way we can build images for each environment with the static config in place
20:29:12 <konetzed> hub_cap: something we can revisit later
20:29:13 <SlickNik> +1 to moving them into the same file.
20:29:18 <hub_cap> grapex: either way
20:29:24 <grapex> hub_cap: Or we could have the guest grab additional info by asking Trove for it
20:29:27 <grapex> Although
20:29:27 <ashestakov> maybe we can insert guest_info into the config on instance create, and use only one file?
20:29:28 <hub_cap> your taskmgr will have that config file for that config environment
20:29:36 <grapex> lol, how could it ask Trove without already having it?
20:29:39 <hub_cap> ashestakov: i think thats what we are suggesting
20:29:53 <hub_cap> lol grapex
20:30:00 <hub_cap> im ok with 1 config file
20:30:03 <hub_cap> but lets leave 2 for now
20:30:09 <konetzed> +1
20:30:09 <hub_cap> and make them discrete reviews / blueprints
20:30:22 <hub_cap> lets first inject the main config and revisit it
20:30:26 <hub_cap> sound good?
20:30:30 <SlickNik> yeah, doesn't have to be part of the same bp
20:30:35 <hub_cap> exactly SlickNik
20:30:47 <hub_cap> ok ill add to this blueprint what weve discussed
20:30:50 <hub_cap> after the meeting
20:30:52 <hub_cap> and approve it
20:30:54 <hub_cap> NEXT
20:30:59 <SlickNik> is this for h3?
20:31:00 <vipul> oh oh i want that to be configurable :)
20:31:02 <hub_cap> #link https://blueprints.launchpad.net/trove/+spec/guestagent-through-userdata
20:31:10 <vipul> not just send it by default
20:31:14 <hub_cap> SlickNik: maybe but maybe not
20:31:21 <vipul> to CYA in prod
20:31:27 <hub_cap> ok vipul thats faire
20:31:28 <hub_cap> *fair
20:31:51 <hub_cap> ill add that as well
20:32:02 <SlickNik> sounds good.
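To pin down what was just agreed, a hedged sketch of the taskmanager side: the rendered guestagent config rides along in the same nova file-injection map as the per-instance guest_info, behind a config flag as vipul asked. The flag name, file paths, and guest_info fields below are assumptions for illustration:

    def build_injected_files(instance_id, tenant_id, guest_config_contents,
                             inject_guest_config=True):
        """Build the files= dict handed to nova's servers.create().

        Sketch only; 'inject_guest_config' and the target paths are
        illustrative names, not settled by the blueprint.
        """
        files = {
            # guest_info is generated per instance, so it always gets injected.
            "/etc/guest_info": ("[DEFAULT]\nguest_id=%s\ntenant_id=%s\n"
                                % (instance_id, tenant_id)),
        }
        if inject_guest_config:
            # The static guestagent config can ride along here too, and the
            # two could later be merged into a single file as discussed.
            files["/etc/trove/trove-guestagent.conf"] = guest_config_contents
        return files


    # Usage (python-novaclient accepts a files= kwarg for file injection):
    #   nova.servers.create(name, image, flavor,
    #                       files=build_injected_files(...), ...)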
20:32:05 <hub_cap> ashestakov: go ahead with the guestagent-userdata
20:32:45 <ashestakov> so guestagent-userdata, i suggest simply adding the possibility to push a cloudinit script to the instance
20:33:04 <ashestakov> this script can prepare the instance and set up the package with the agent
20:33:14 <hub_cap> +1 to this
20:33:27 <hub_cap> basically move the bootstrap junk we do today to userdata
20:33:28 <konetzed> interesting use case
20:33:39 <konetzed> but should make things way more flexible
20:33:47 <vipul> does this require bringing back apt repo into devstack?
20:33:55 <hub_cap> i dont think so
20:34:01 <hub_cap> we will not do that ;)
20:34:10 <ashestakov> i think the script may differ, depending on service type
20:34:20 <hub_cap> agreed
20:34:27 <konetzed> with this is there any reason that repos couldnt be handed down to a guest?
20:34:53 <vipul> repo or package?
20:34:57 <konetzed> repo
20:35:16 <dukhlov> maybe we can also deploy guestagent config using this script too? (from previous blueprint)
20:35:17 <hub_cap> cloud-init is easy to make configurable
20:35:20 <konetzed> really a repo is just a conf file
20:35:41 <hub_cap> dukhlov: thats not a bad idea. the guest_info needs to be injected because its created on the fly for each instance
20:35:44 <konetzed> maybe we are getting lost in what this could all be used for
20:35:54 <hub_cap> but the config file that has the static stuff could be done this way too
20:36:04 <hub_cap> yes konetzed we are. i think its a good idea
20:36:10 <vipul> my understanding so far is we do a firstboot.d and rsync the guest agent.. we want to change it to be user_data on boot
20:36:21 <saurabhs> what all do we want inside the user_data script? Just install the guest agent or install all the dependencies (like, mysql)?
20:36:43 <hub_cap> saurabhs: well for the dev env i think install the dependency too
20:37:00 <hub_cap> but we need to make sure the user data script is configurable so a production env can use vanilla images
20:37:07 <vipul> Yea as long as the contents of that script can be driven by deployment then it should be ok
20:37:12 <hub_cap> correct vipul
20:37:27 <hub_cap> i think that ashestakov is thinking of a vanilla image, correct?
20:37:36 <ashestakov> hub_cap: correct
20:37:38 <hub_cap> and we can install each service on it in the user data script
20:37:45 <hub_cap> so that its ready to run on create
20:37:51 <hub_cap> or ready to "configure" on create
20:37:56 <saurabhs> if we put too much inside user_data it increases the instance boot up/init time
20:37:57 <hub_cap> where the guest does the configuration
20:38:10 <hub_cap> saurabhs: yes it does, and for development we dont want this
20:38:15 <hub_cap> but for deployment its feasible
20:38:18 <ashestakov> i mean, by this script we can configure selinux, iptables, repos, setup tools, setup agent, setup anything
20:38:26 <SlickNik> saurabhs: that's why we'll make it configurable at deploy time.
20:38:27 <hub_cap> exactly
20:38:46 <vipul> we just need to make sure we're still doing a mysql image in redstack
20:38:51 <vipul> so our int-tests are sane
20:39:00 <SlickNik> +1 to that vipul
20:39:14 <SlickNik> Otherwise it might take longer.
20:39:24 <hub_cap> vipul: correct
20:39:35 <hub_cap> and then ashestakov can use vanilla images w/ special networking stuff in his deployment if needed
20:39:43 <vipul> sounds good
20:40:01 <hub_cap> sound good ashestakov ?
20:40:11 <ashestakov> hub_cap: yes
20:40:21 <hub_cap> perfect ill add the summary after the meeting
20:40:23 <SlickNik> sounds good to me as well.
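A minimal sketch of the userdata path just discussed: the taskmanager looks for a per-service-type cloud-init script in a deployment-controlled directory and hands it to nova at boot, so production can run vanilla images while redstack keeps its mysql image for the int-tests. The directory and file-naming convention are assumptions:

    import os

    # Illustrative location; a real deployment would point this at its own dir.
    CLOUDINIT_LOCATION = "/etc/trove/cloudinit"


    def get_userdata(service_type):
        """Return the cloud-init script for this service type, if any.

        Deployers drop e.g. /etc/trove/cloudinit/mysql.cloudinit in place; the
        script can configure selinux, iptables, repos, and install the agent.
        Returning None means nova boots the image untouched.
        """
        path = os.path.join(CLOUDINIT_LOCATION, "%s.cloudinit" % service_type)
        if os.path.isfile(path):
            with open(path) as script:
                return script.read()
        return None


    # Usage: nova.servers.create(name, image, flavor,
    #                            userdata=get_userdata("mysql"), ...)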
20:40:25 <hub_cap> moving on
20:40:35 <hub_cap> KennethWilke: im saving yours
20:40:37 <hub_cap> #link https://blueprints.launchpad.net/trove/+spec/ssh-key-option
20:40:43 <hub_cap> i like this one as well for the record
20:40:45 <dukhlov> also, as far as I know this is the way HEAT works too, so HEAT integration will be easier for us in the future
20:41:26 <SlickNik> how are we proposing we add the key in?
20:41:31 <hub_cap> dukhlov: all of this will help with heat integration
20:41:35 <konetzed> hub_cap: couldnt this be done by the last blueprint?
20:41:37 <hub_cap> and im doing that starting this week
20:41:45 <hub_cap> so i might be adding all this in by default ;)
20:41:49 <hub_cap> konetzed:  not exactly
20:41:57 <hub_cap> nova boot has a special kwarg for this
20:42:04 <hub_cap> SlickNik: for the purpose of maintenance
20:42:04 <konetzed> ah
20:42:06 <hub_cap> _not_ for a customer
20:42:10 <hub_cap> NOT NOT NOT for a customer ;)
20:42:25 <hub_cap> for the system maintainers to log in to instances that arent containers (lol)
20:42:36 <hub_cap> thats how i understand it
20:42:38 <hub_cap> correct ashestakov ?
20:42:43 <grapex> hub_cap: Sounds like a great idea.
20:42:44 <vipul> basically adding a --key-name arg?
20:42:47 <ashestakov> hub_cap: exactly
20:42:54 <hub_cap> yes vipul
20:43:03 <grapex> Seems like it would be best to add an RPC call to add this on demand as well.
20:43:09 <ashestakov> but, will there be only one key for all instances?
20:43:11 <grapex> Make it part of the MGMT api
20:43:19 <SlickNik> So, I take it this would be configurable somewhere as well?
20:43:22 <hub_cap> ashestakov: i assumed so
20:43:28 <adrian_otto> keys need to be unique to instances
20:43:33 <grapex> ashestakov: If it was a MGMT api call you could pass in a password, or one would be generated and get passed back to you.
20:43:33 <SlickNik> Will we do key mgmt for nova?
20:43:46 <hub_cap> adrian_otto: this is for maintenance w/o a keyserver so im not sure thatll be the case
20:43:49 <SlickNik> I mean, what if the key name isn't already in nova?
20:44:01 <hub_cap> the customer wont be able to create or use the key
20:44:14 <adrian_otto> even for maintenance, the best practice is not to share credentials within a grouping of resources.
20:44:14 <vipul> trove currently boots an instance in a non-shared tenant
20:44:20 <vipul> so that keyname has to exist in each tenant
20:44:45 <adrian_otto> you can use a keyserver if you want a single credential to yield more credentials.
20:45:13 <hub_cap> so based on what adrian_otto and vipul say, we should probably investigate this more
20:45:38 <hub_cap> ashestakov: lets focus this blueprint based on security concerns and talk about it next week
20:45:41 <SlickNik> Yes, this seems problematic as is.
20:46:06 <ashestakov> actually, we can push the key through cloudinit :)
20:46:11 <hub_cap> ashestakov: can you answer the questions (by next week) on 1) different keys for each instance, 2) keys belonging to each tenant
20:46:16 <vipul> there you go ;)
20:46:23 <hub_cap> ashestakov: given you can do that, its up to you to decide how secure to make it eh?
20:46:26 <hub_cap> and we can keep it out of trove
20:46:35 <SlickNik> ashestakov: that might be a better option to consider.
20:46:44 <adrian_otto> as an operator you can decide to put the same key on all instances, but the system should support the ability to put a  unique key on each instance to allow for the best practice to be applied.
20:46:46 <SlickNik> Let's think about this one and discuss it some more.
20:47:00 <hub_cap> adrian_otto: i agree w that
20:47:01 <vipul> adrian_otto: +1
20:47:08 <SlickNik> def adrian_otto
20:47:14 <KennethWilke> lol
20:47:14 <hub_cap> so lets leave this as not approved and we can discuss more ashestakov
20:47:27 <hub_cap> i think he meant def adrian_otto(self):
20:47:32 <KennethWilke> i think so too
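For the ssh-key blueprint, the nova-facing part is small since boot already accepts a key_name kwarg; the open questions above (per-instance keys, the key existing in each tenant) are the real work. A sketch of the config-gated version, with assumed option names:

    def maintenance_boot_kwargs(conf):
        """Extra kwargs for nova boot when an operator maintenance key is set.

        'use_nova_key_name' and 'nova_keypair' are illustrative option names.
        The keypair must already exist in the tenant the instance is booted
        under, which is exactly the per-tenant problem raised above.
        """
        if conf.get("use_nova_key_name") and conf.get("nova_keypair"):
            return {"key_name": conf["nova_keypair"]}
        return {}


    # Usage: nova.servers.create(name, image, flavor,
    #                            **maintenance_boot_kwargs(CONF), ...)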
20:47:51 <hub_cap> just wanted to quickly say
20:47:53 <hub_cap> #link https://blueprints.launchpad.net/horizon/+spec/trove-support
20:48:00 <konetzed> there are about 100 ways of doing this w/o putting a key on
20:48:01 <hub_cap> there is now a blueprint to add trove support to horizon
20:48:07 <SlickNik> heh, just re-read my last comment
20:48:12 <hub_cap> agreed konetzed
20:48:18 <konetzed> i type slow :(
20:48:33 <vipul> hub_cap: Nice!
20:48:38 <SlickNik> Awesome!
20:48:46 <hub_cap> maybe we can have the work robertmyers did (and AGiardini updated) pushed up for review
20:48:47 <vipul> so we're able to do this now i take it
20:48:52 <robertmyers> yes
20:48:57 <hub_cap> well they said there was no hard and fast rule
20:48:58 <SlickNik> sweet robertmyers!
20:49:01 <grapex> That would be awesome.
20:49:03 <vipul> yea let's do it.. i know some HP'ers have had some issues getting it running
20:49:06 <hub_cap> and that the other projects did it /after/ integration
20:49:09 <vipul> this will help keep it running :)
20:49:12 <hub_cap> but that we could do it earlier
20:49:19 <hub_cap> im all for it
20:49:36 <hub_cap> ok so last BP
20:49:36 <robertmyers> I need to dig in to the code
20:49:39 <hub_cap> (this is fun!)
20:49:45 <robertmyers> I left it a while ago
20:49:47 <hub_cap> robertmyers: plz chat with AGiardini
20:49:51 <hub_cap> hes been updating it
20:49:53 <SlickNik> this is KennethWilke, I take it
20:49:56 <hub_cap> dont want to nullify what hes done
20:49:59 <hub_cap> yes it is KennethWilke
20:50:02 <hub_cap> #link https://blueprints.launchpad.net/trove/+spec/taskmanager-statusupdate
20:50:25 <vipul> +100
20:50:33 <hub_cap> so want me to summarize KennethWilke?
20:50:38 <KennethWilke> ill go for it
20:50:42 <hub_cap> kk
20:50:53 <KennethWilke> generally speaking i don't like the idea of the guest agent communicating directly with mysql or whatever the db is that the trove database lives on
20:51:14 <hub_cap> #agreed
20:51:17 <saurabhs> I don't like that either.
20:51:22 <hub_cap> i think no one does
20:51:23 <konetzed> i dont think anyone likes it
20:51:28 <KennethWilke> based on my current understanding the best place for this to take place would be the taskmanager, but others have brought forth alternatives to this as well
20:51:29 <hub_cap> except amytron
20:51:38 <hub_cap> trove-conductor !!
20:51:42 <SlickNik> I think it's one of those necessary-evil artifacts from the past.
20:51:59 <vipul> hub_cap: not a bad idea
20:52:01 <hub_cap> konetzed: and i talked about creating a new manager to do this
20:52:06 <amytron> hub_cap:  me?!
20:52:09 <hub_cap> since nova has a nova-conductor
20:52:11 <vipul> yep, please don't add it to taskmanager
20:52:18 <SlickNik> lol, trove-conductor was exactly what I was thinking.
20:52:19 <hub_cap> that proxies the db work the computes do
20:52:25 <vipul> also, i actually like what nova has done
20:52:26 <vipul> #link https://github.com/openstack/nova/tree/master/nova/servicegroup
20:52:33 <hub_cap> we would have a trove-conductor to proxy the stuff that the guest does
20:52:40 <vipul> this is the compute status API..
20:52:51 <vipul> which is what everything goes through to get status of a compute node
20:52:54 <vipul> so we could do the same
20:53:13 <vipul> so if we don't want to put these health check updates in DB, we don't have to
20:53:27 <grapex> vipul: Great idea.
20:53:30 <hub_cap> hmm
20:53:35 <hub_cap> thats a good idea i think
20:53:43 <konetzed> i figured we would get there
20:53:54 <hub_cap> +1 to zookeeper ;)
20:53:58 <KennethWilke> i understand if the community would like to go a route similar to nova-conductor, but if we go in that direction i am not confident i have the requisite understanding to take care of this in a timely manner
20:54:01 <grapex> hub_cap: Memories. :)
20:54:13 <hub_cap> right grapex??????????
20:54:18 <konetzed> hub_cap: i think db first and pick a better store second
20:54:28 <vipul> inside joke ?
20:54:29 <vipul> lol
20:54:29 <hub_cap> timely schmimely KennethWilke
20:54:41 <hub_cap> vipul: a long while ago we started a java+zk POC for trove
20:54:42 <grapex> vipul: Our pre-OpenStack stuff used ZooKeeper.
20:54:45 <SlickNik> well, db implementation is the first one that's absolutely needed.
20:54:47 <hub_cap> before we went the openstack route
20:54:55 <hub_cap> yes SlickNik i agree w/ that
20:54:55 <vipul> i'm not saying we need to support multiple stores, but support the driver concept
20:54:56 <SlickNik> other impl's can come later.
20:55:03 <konetzed> SlickNik: yep
20:55:05 <hub_cap> amytron: BOOYA
20:55:10 <konetzed> i think the db is the worst place to store this info
20:55:22 <konetzed> mainly cuz who cares about it historically
20:55:23 <hub_cap> konetzed: long term so do i
20:55:33 <hub_cap> so lets not get into it too too much
20:55:35 <konetzed> but like i said first db then something else
20:55:37 <hub_cap> lets approve the BP
20:55:38 <adrian_otto> yep, abstract where it's stored.
20:55:42 <hub_cap> +1
20:55:48 <hub_cap> its /stored/
20:55:52 <grapex> I like that it abstracts out how we grab the heart beat. I can help KennethWilke- I really like this servicegroup drivers thing Vipul pointed out.
20:56:03 <KennethWilke> i think we may need a different BP if we're going the conductor route
20:56:06 <hub_cap> yes as do i grapex
20:56:11 <hub_cap> KennethWilke: we can mod the blueprint
20:56:30 <hub_cap> to keep the history of it
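To make the driver idea concrete (by analogy with nova's servicegroup drivers): the guest reports heartbeats over RPC to a conductor-like service, which writes through a pluggable store, with the database as the first and only required backend. Class names and the db_api methods below are sketches, not the blueprint's final design:

    import abc
    import time


    class HeartbeatDriver(object):
        """Abstracts where guest heartbeats live (DB first, maybe something
        like ZooKeeper later) so nothing outside the driver touches the store."""

        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def report(self, instance_id, payload):
            """Record a heartbeat for an instance."""

        @abc.abstractmethod
        def is_up(self, instance_id, timeout=60):
            """Return True if the instance heartbeated within `timeout` seconds."""


    class DbHeartbeatDriver(HeartbeatDriver):
        """First implementation: persist heartbeats in the trove database.

        The db_api calls are hypothetical; the key point is that the guest
        itself never opens a connection to the infrastructure database."""

        def __init__(self, db_api):
            self.db_api = db_api

        def report(self, instance_id, payload):
            self.db_api.agent_heartbeat_update(instance_id,
                                               updated_at=time.time(),
                                               payload=payload)

        def is_up(self, instance_id, timeout=60):
            beat = self.db_api.agent_heartbeat_get(instance_id)
            return beat is not None and (time.time() - beat["updated_at"]) < timeout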
20:56:36 <hub_cap> ok lets move to open discussion
20:56:38 <KennethWilke> alrighty
20:56:41 <hub_cap> ashestakov:  had another question
20:56:46 <hub_cap> #topic open discussion
20:56:51 <hub_cap> go ahead ashestakov
20:57:00 <ashestakov> hub_cap: yep, how to separate guestagent from trove?
20:57:05 <hub_cap> Ahhh yes
20:57:09 <hub_cap> we want to do that
20:57:11 <ashestakov> i mean for packaging and deployment
20:57:17 <hub_cap> to make a separate project, right?
20:57:24 <hub_cap> trove-guest/ or whatever
20:57:31 <ashestakov> maybe, or separate setup.py
20:57:32 <hub_cap> ashestakov: i think that all of core wants that
20:57:38 <adrian_otto> different codebase, same project?
20:57:52 <hub_cap> either or adrian_otto..
20:57:55 <adrian_otto> an OpenStack project can have multiple named codebases, for this very purpose
20:58:01 <vipul> i think we'd want a trove-agent repo or something so it works well with the CI tools
20:58:07 <adrian_otto> because they may be distributed separately
20:58:10 <hub_cap> well mordred says its not a good idea to have different codebases
20:58:14 <hub_cap> due to the setup.py stuff
20:58:18 <mordred> what?
20:58:24 * mordred reads
20:58:25 <hub_cap> 2 projects 1 codebase
20:58:26 <adrian_otto> no
20:58:28 <hub_cap> mordred: ^ ^
20:59:01 <mordred> hub_cap: nope. it's TOTALLY possible to have two different repos run by the same project
20:59:07 <adrian_otto> yes!
20:59:11 <hub_cap> different repos
20:59:17 <mordred> what is not allowed is subdirs in the same repo each with their own setup.py
20:59:20 <hub_cap> as in 2 github repos correct mordred?
20:59:31 <vipul> yep that's what i thought
20:59:42 <hub_cap> yes not /root/{trove,trove-guest}/setup.py
20:59:57 <hub_cap> is that what you meant adrian_otto?
21:00:02 <hub_cap> we meant to create a new repo
21:00:13 <adrian_otto> if your intent is to distribute them separately, then yes, two separate repos (still within the Trove project), each bundled separately for distribution.
21:00:24 <hub_cap> correct
21:00:29 <ashestakov> cool
21:00:34 * mordred injects giant head into the conversation ...
21:00:41 * hub_cap runs
21:00:42 <SlickNik> Yup. All this is possible when we separate them into two repos.
21:00:46 * imsplitbit runs too
21:00:58 <mordred> heat also has in-instance stuff ... you guys have at least looked at the overlap, yeah?
21:01:01 * hub_cap is crushed by mordreds giant head as he wields it around
21:01:02 <adrian_otto> time is up
21:01:13 <hub_cap> yes lets chat in #openstack-trove
21:01:15 <hub_cap> #endmeeting