21:04:49 <SlickNik> #startmeeting Trove
21:04:50 <openstack> Meeting started Tue Jun 18 21:04:49 2013 UTC.  The chair is SlickNik. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:04:51 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:04:53 <openstack> The meeting name has been set to 'trove'
21:05:25 <SlickNik> #topic Agenda Items
21:05:43 <SlickNik> #link http://eavesdrop.openstack.org/meetings/trove___reddwarf/2013/trove___reddwarf.2013-06-11-21.03.html
21:06:02 <SlickNik> esmute: the first one's us.
21:06:09 <esmute> ok
21:06:23 <esmute> im guessing it's the rd-jenkins log?
21:06:32 <SlickNik> yes
21:06:35 <SlickNik> esmute/SlickNik to figure out the archiving of the reddwarf logs for rdjenkins jobs.
21:06:41 <esmute> Spoke to clarkb.
21:07:03 <esmute> Since we have been incubated, we might not need to use our own rd-jenkins for rd int tests
21:07:12 <esmute> we can leverage openstack-jenkins to do this.
21:07:35 <esmute> it seems that i have to create a YAML file that describes the jenkins job
21:07:50 <esmute> and add it to the openstack-infra project
21:07:53 <esmute> https://github.com/openstack-infra/config/tree/master/modules/openstack_project/files/jenkins_job_builder/config
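(For context: a Jenkins Job Builder definition is just a YAML list entry. A minimal sketch of what such a file might look like; the job name, node label, and script are made up for illustration:)

    - job:
        name: gate-trove-integration
        node: precise
        builders:
          # gerrit-git-prep / console-log are macros defined elsewhere in the infra config
          - gerrit-git-prep
          - shell: './run_int_tests.sh'
        publishers:
          - console-log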
21:08:00 <SlickNik> So basically the best way to move this ahead is to get the old devstack-vm-gate patch that we had for running our int-tests revived, I think.
21:08:20 <vipul> #winning
21:08:29 <esmute> once we do this, everything will take care of itself. They already have a way to create an instance with devstack...
21:08:41 <esmute> we can just install trove in there and run tests
21:08:45 <hub_cap> #tigerblood
21:09:06 <SlickNik> Okay, so let's convert this action item to do that then...
21:09:07 <esp> nice.  we can finally put rd-jenkins down like an ailing calf.
21:09:11 <esmute> the logs will be put in a well-known directory and jenkins will pick them up and move them to the openstack log file server.
21:09:40 <SlickNik> hey grapex
21:09:45 <hub_cap> can we add an agenda item to talk about the reddwarf->trove module move? (editing wiki sux on phone)
21:09:52 <grapex> SlickNik: Weird... limechat just crashed. :)
21:10:00 <vipul> i like this plan more, instead of figuring out logging
21:10:09 <vipul> hub_cap: i'll add it
21:10:19 <hub_cap> wont we still have to figure out logging?
21:10:22 <SlickNik> #action esmute and SlickNik to look into what happened to devstack-vm-gate integration.
21:10:43 <vipul> openstack-infra's job
21:10:45 <grapex> Does the existing CI infrastructure capture logs of daemons on failed test runs?
21:11:02 <vipul> we'll have to figure out a way to get the guest log, sigh
21:11:06 <esmute> hub_cap: According to clarkb, as long as we put the logs in a well known dir, jenkins will push them to the log file server
21:11:13 <SlickNik> grapex: I'm not sure, that's something we'll have to talk to clarkb / infra team about.
21:11:36 <esmute> a folder in the log file server with the project name, build number and other information about the run will be created
21:12:09 <esmute> and the logs will be placed there... just like the jenkins run that executes the tox tests does now
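(A rough sketch of the flow esmute is describing; the paths and layout here are illustrative, not the real infra config:)

    # the job drops test output in a well-known directory...
    mkdir -p logs
    cp /var/log/trove/*.log logs/
    # ...and a jenkins publisher copies that directory to the log server,
    # under a path built from the project name and build number, e.g.
    #   http://logs.openstack.org/<project>/<job>/<build-number>/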
21:12:14 <hub_cap> sweet im on my laptop now
21:12:25 <hub_cap> they put a plug in a breaker outside and dont need it anymore
21:12:28 <hub_cap> now i can type fast
21:12:30 <hub_cap> woot!!!!!
21:12:35 <SlickNik> welcome back… :)
21:12:39 <SlickNik> moving on.
21:12:44 <hub_cap> hopefully i can get enough charge so if they need it again i can stay on
21:12:55 <SlickNik> datsun180b to create a doodle for meeting times
21:13:09 <grapex> Did everyone vote?
21:13:09 <SlickNik> thanks for doing that datsun180b
21:13:15 <SlickNik> even though he's not here.
21:13:21 <hub_cap> i have not voted. i will now tho
21:13:22 <esmute> grapex: I am not a citizen
21:13:25 <hub_cap> can someone link me?
21:13:29 <grapex> http://doodle.com/fvpxvyxhmc69w6s9#table
21:13:31 <esmute> vote for what?
21:13:36 <vipul> lol
21:13:40 <grapex> esmute: The meeting time.
21:13:44 <hub_cap> esmute: whether to deport you
21:13:44 <SlickNik> #link http://doodle.com/fvpxvyxhmc69w6s9erhvvpt4/admin#table
21:13:47 <hub_cap> oh thats what i meant grapex
21:14:20 <SlickNik> I think datsun180b was going to close the vote soon.
21:14:42 <SlickNik> So please vote on the new meeting time ASAP if you haven't already done so.
21:14:57 <SlickNik> Anything else to add here?
21:15:06 <juice> DON'T FORGET TO SET THE TIMEZONE WHEN YOU DO
21:15:14 <juice> wow that really sticks out :)
21:15:16 <hub_cap> we need to remove the tue 2pm pdt
21:15:23 <vipul> early results... looks like the current time will work
21:15:33 <vipul> just need to change the day
21:15:48 <vipul> juice: OKAY
21:15:57 <SlickNik> I think that's probably what will end up happening.
21:16:01 <SlickNik> Same time, different day.
21:16:06 <imsplitbit> which day?
21:16:13 <SlickNik> But who knows — once the votes are in...
21:16:14 <vipul> TBD
21:16:35 <vipul> let's make the deadline EOW?
21:16:47 <SlickNik> I think that's reasonable.
21:16:49 <esmute> ok just voted
21:17:12 <SlickNik> Please vote before end of the week.
21:17:18 <vipul> annashen, esp ^
21:17:23 <SlickNik> I'll ask datsun180b to close the vote then
21:17:34 <SlickNik> http://doodle.com/fvpxvyxhmc69w6s9erhvvpt4/admin#table
21:17:42 <SlickNik> ^^annashen, esp
21:17:50 <SlickNik> okay, let's move on
21:18:02 <SlickNik> robertmyers add bug for backup deletion
21:18:24 <grapex> SlickNik: Rob can't be here today, but he wanted me to tell everyone his work continues. :)
21:18:29 <esp> SlickNik: ok I'm on it.
21:18:54 <SlickNik> I think he added the info already.
21:19:11 <SlickNik> Thanks grapex.
21:19:20 <juice> well said grapex
21:19:28 <juice> (golf clap)
21:19:49 <grapex> Thank you, thank you all very much.
21:19:54 * grapex blushes with pride. :)
21:20:05 <SlickNik> okay, moving on.
21:20:12 <SlickNik> Not sure who had this one (hub_cap?)
21:20:16 <SlickNik> look into Heat Agent for packaging / repository organization
21:20:21 <hub_cap> oya
21:20:34 <hub_cap> ive got the template finished for ubuntu
21:20:47 <vipul> oh this is diff - it's a carryover from separating the guest agent
21:20:48 <hub_cap> was working on changing dib / elements up for it
21:20:55 <hub_cap> AH
21:20:57 <vipul> into its own repo
21:21:14 <hub_cap> oh the guest, like trove-guest?
21:21:14 <vipul> i think we still need to do that..
21:21:19 <vipul> yes
21:21:30 <hub_cap> for packaging purposes it's a good idea, grapex will hate it
21:21:36 <vipul> you'd be surprised :)
21:21:39 <grapex> hub_cap: I broached it last time. :)
21:21:46 <grapex> Though I originally didn't want to
21:21:53 <hub_cap> WHAT?!?!?!?!!?
21:21:54 <SlickNik> Yeah, he mentioned it the last time. :)
21:22:01 <hub_cap> u want to *cough cough* separate things???????????????????????????????????????????
21:22:18 * hub_cap 's mind is blown
21:22:29 <vipul> lol
21:22:32 <hub_cap> im all for it
21:22:35 <vipul> let'd do this!
21:22:39 <grapex> Honestly I think fewer repos would be cool... I heard CERN is working on technology that would make packaging possible even in such terrifying circumstances.
21:22:50 <hub_cap> lol package smasher grapex
21:23:09 <SlickNik> Have you found a god package yet, grapex?
21:23:10 <imsplitbit> :)
21:23:10 <grapex> But, I think for the guest having a unique structure would help out to separate it from the Trove code, for like cfgs and stuff.
21:23:28 <hub_cap> +1 it helps for packaging too
21:24:15 <vipul> so the action was referring to Heat already doing something like this for their agent?
21:24:20 <SlickNik> I'm in favor of separating it out as well.
21:24:22 <vipul> and looking into that..
21:24:23 <grapex> Hey, I don't want to make Santa Claus's job any harder, believe me. :) I'm ok with the guest being in its own repo.
21:25:10 <SlickNik> Anyone want to action that again for this week?
21:25:40 <vipul> wow.. no volunteers
21:25:44 <vipul> you can give it to me
21:25:49 <juice> I'll do it
21:25:54 <hub_cap> volunteers!
21:26:06 <juice> I'll be babysitting validation
21:26:10 <juice> but nothing else on the plate
21:26:17 <esmute> what is the name of the repo? trove-agent?
21:26:29 <SlickNik> #action juice / vipul to look into how Heat achieves packaging / repo organization for its agent
21:26:40 <juice> what about shared code - are we going to have a common repo
21:26:44 <vipul> esmute: TBD
21:26:44 <SlickNik> thanks guys. <3
21:26:51 <juice> or will trove-agent depend on trove code
21:27:04 <juice> trove-common?
21:27:13 <vipul> juice: since we're using oslo, that should be our common code (hopefully)
21:27:21 <grapex> juice: Will someone shoot us if we had five Trove repos?
21:27:32 <imsplitbit> I will
21:27:33 <juice> 3
21:27:34 <esmute> trove-agent should be talking over rabbit... hopefully there won't be a need for a common
21:27:36 <SlickNik> A trove of Trove repos!
21:27:41 <SlickNik> How delightful...
21:27:45 <juice> I mean common between guest agent and api proper
21:27:57 <imsplitbit> well it makes sense to have a common
21:28:19 <vipul> i wonder how much code is actually used from reddwarf.common
21:28:24 <vipul> i'd bet only a couple of classes
21:28:35 <vipul> service, and cfg maybe?
21:28:56 <kevinconway> wsgi also
21:28:59 <imsplitbit> even if it's lightweight it makes sense, or use openstack common and contribute our common utils to that?
21:29:09 <vipul> guest wont need wsgi i don't think
21:29:14 <esmute> api mostly
21:29:21 <juice> I'll do some analysis and report back
21:29:26 <vipul> kk
21:29:28 <juice> I am sure that it's more than a handful
21:29:39 <SlickNik> Yeah, probably have to do some research and figure out the common footprint.
21:29:48 <juice> but nevertheless we shouldn't be copying and pasting code
21:29:51 <SlickNik> I think there might be some common instance models as well.
21:29:58 <SlickNik> But I don't know for sure off the top of my head.
21:30:14 <imsplitbit> juice: +1
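(One quick way to size up that footprint; a hypothetical one-liner run from the pre-rename repo root:)

    # count how often each reddwarf.common module is imported by the guest agent
    grep -rhoE 'from reddwarf\.common import \w+' reddwarf/guestagent/ \
        | sort | uniq -c | sort -rn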
21:30:22 <SlickNik> Okay, next action item.
21:30:34 <SlickNik> Vipul and SlickNik (and others) to provide feedback on Replication API
21:30:43 <esmute> SlickNik: If there is, that would violate the separation of concerns principle
21:30:56 <imsplitbit> https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API
21:31:01 <SlickNik> thanks imsplitbit
21:31:05 <imsplitbit> I moved the api stuff to its own page
21:31:08 <imsplitbit> #link https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API
21:31:24 <vipul> imsplitbit: one quick comment i had was when creating a cluster, why no flavor ?
21:31:31 <vipul> do all nodes have to be same flavor?
21:31:50 <imsplitbit> there is some debate on that
21:32:03 <imsplitbit> there's a clear argument for allowing any flavor
21:32:26 <imsplitbit> but it can also be detrimental to performance
21:32:36 <imsplitbit> if you have a 8gb master and 2 512 slaves
21:33:02 <imsplitbit> that could be bad
21:33:13 <vipul> right...
21:33:22 <imsplitbit> I would say make it optional?
21:33:37 <vipul> yea, should always worry when there's $$ involved
21:33:44 <vipul> so that optional would be fine
21:34:42 <vipul> also is the purpose of the 'attributes' element to be a generic area like metadata?
21:34:52 <imsplitbit> yes
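(For concreteness, a cluster-create request under this proposal might look roughly like the following; the field names are illustrative guesses, not the wiki spec, with per-node flavors left optional as just discussed:)

    POST /v1.0/{tenant_id}/clusters
    {
        "cluster": {
            "name": "products",
            "clusterType": "mysql-master-slave",
            "nodes": [
                {"instanceRef": "inst-1", "flavorRef": "7"},
                {"instanceRef": "inst-2"}
            ],
            "attributes": {"owner": "web-team"}
        }
    }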
21:35:41 <esp> should Create Replication Set: (Previous db instance) be a PUT ?
21:36:27 <esp> eh guess maybe not
21:36:37 <vipul> to esp's point, i don't see a restful way to say.. Create an Instance via /instances and convert that to a cluster (modify the instance)
21:37:12 <vipul> also, can we do away with 'actions' PUT /clusters/{cluster_id}/actions
21:37:13 <imsplitbit> vipul: neither did I.  I was hoping for some more expertise on that particular path
21:37:54 <imsplitbit> actions was a demorris contribution, he is a big fan of using actions for things like promote
21:37:55 <vipul> since you're already doing a PUT on /clusters, why have actions
21:37:55 <esp> I think POST is fine too.  just wondering
21:38:17 <kevinconway> the question is how to RESTify the conversion of an instance to a cluster?
21:38:27 <imsplitbit> that is one question
21:38:53 <imsplitbit> clustertypes is ok?
21:38:56 <kevinconway> would it not be a POST to cluster with the instance ids?
21:39:07 <imsplitbit> kevinconway: thats what I have it as now
21:39:14 <imsplitbit> IIRC
21:39:19 <vipul> it has a side effect of modifying another resource
21:39:27 <vipul> that's the only thing in question
21:39:31 <imsplitbit> yeah
21:39:44 <vipul> regarding actions, you have "role": "slave",
21:39:56 <vipul> you could PUT and change the 'role' element
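(i.e., something like this; the request shape is hypothetical:)

    PUT /v1.0/{tenant_id}/clusters/{cluster_id}
    {
        "cluster": {
            "nodes": [
                {"id": "{instance_id}", "role": "master"}
            ]
        }
    }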
21:40:04 <kevinconway> can you no longer reference the instance by itself?
21:40:39 <vipul> that's a good question, can you do instance operations on an instance that is also joined to a cluster?
21:40:43 <imsplitbit> kevinconway: we had some good discussion on that.  If you make changes to an instance without using the context of the cluster you have the potential to break things in a magnificent way
21:40:46 <vipul> or should you only do cluster operations?
21:41:02 <imsplitbit> we contend that once a cluster, always a cluster
21:41:11 <imsplitbit> operations *should* be done on the cluster
21:41:25 <imsplitbit> because thats where the cluster aware knowledge is
21:41:25 <esp> makes sense
21:41:40 <kevinconway> i guess my question is does creating a cluster modify the resource metadata at all or simply alter the underlying instance and create a new cluster resource?
21:41:49 <SlickNik> imsplitbit: I think I agree, but what if the action only applies to one node in the cluster.
21:42:01 <imsplitbit> if you remove all slaves/nodes from a cluster leaving one tho it should become just an instance
21:42:10 <kevinconway> shouldn't it be like putting a user in a user group?
21:42:32 <imsplitbit> SlickNik: there are some edge cases where it makes sense to do an operation on just a node of the cluster
21:42:41 <imsplitbit> like if you allow different flavors
21:42:48 <imsplitbit> and you need to bump the memory of one of the slaves
21:43:06 <imsplitbit> so it makes sense to allow those things at the instance level
21:43:09 <kevinconway> what if i just want to kill an instance in my cluster for a cool effect?
21:43:24 <vipul> which people will do on day one
21:43:25 <esp> can of worms (sorry)
21:43:28 <imsplitbit> but if you want to add or remove nodes it should only be done in /clusters
21:43:45 <kevinconway> not removing… lets say restarting
21:44:00 <juice> we can validate and prevent that (removing) on day one and figure out a better solution in v2
21:44:09 <imsplitbit> kevinconway: /instances should be used for stopping and starting nodes IMO
21:44:14 <imsplitbit> but it gets confusing
21:44:31 <imsplitbit> basically any action that can damage the cluster must be done at the cluster level
21:44:41 <imsplitbit> IMO restarting a node shouldn't be destructive
21:44:48 <imsplitbit> so /instances would be the place for that
21:45:01 <imsplitbit> it's a slippery slope tho :)
21:45:25 <kevinconway> i'll go back to my user/usergroup similarity
21:45:42 <SlickNik> yeah, I can see it start to become confusing (which actions do I need to call on /instances vs /cluster?)
21:45:54 <imsplitbit> right
21:45:58 <kevinconway> is there a critical difference between the idea of clusters and the idea of user groups in terms of a rest resource?
21:46:23 <imsplitbit> so adding or removing nodes to a cluster is done at /clusters but individual actions like resize should happen at /instances
21:46:37 <esp> kevinconway: yeah I think a cluster behaves like a single thing
21:46:45 <SlickNik> Yes, kevinconway: I think there is a difference.
21:46:49 <imsplitbit> esp: +1
21:46:58 <kevinconway> esp: so does a group of anything
21:47:15 <vipul> so then why even show them as separate resources
21:47:19 <kevinconway> i can give a group access to a thing without giving access to each individual
21:47:26 <esp> whereas a user group is kinda a collection of individual things
21:47:33 <SlickNik> for users, having them in a user group doesn't mean that certain user operations now need to be done on the user group.
21:48:10 <SlickNik> but for instances, if they are part of a cluster, all mysql operations that were previously on the instance now have to be done on the cluster.
21:48:38 <SlickNik> …or maybe not…just thinking out loud here...
21:48:58 <kevinconway> if that's the case then what about a 301 redirect to the head node when you try to sql on a slave node
21:49:10 <kevinconway> and are we talking master/slave with only one master?
21:49:30 <imsplitbit> we're talking about replication/clustering in the general sense
21:49:51 <imsplitbit> because this api must facilitate doing mongodb replication or even redis or postgres
21:49:58 <SlickNik> I think this topic needs more discussion. :)
21:50:04 <imsplitbit> agree
21:50:05 <kevinconway> a master/master should allow me to interact with any node and have those changes replicated
21:50:05 <vipul> or even galera
21:50:17 <imsplitbit> I would love/welcome much much more discussion
21:50:30 <SlickNik> which may be out of the scope of this week's meeting.
21:50:36 <vipul> regarding clustertypes, what was our proposal for service_types?
21:50:53 <vipul> we had a spec somewhere where we introduced that
21:51:12 <imsplitbit> vipul: link?
21:51:22 <imsplitbit> I don't recall seeing that
21:52:04 <vipul> #link https://wiki.openstack.org/wiki/Reddwarf-versions-types
21:52:07 <vipul> i think that's the one
21:52:22 <SlickNik> Let's take further discussion on this to #openstack-trove...
21:52:23 <imsplitbit> SlickNik: if the discussion is outside the scope of this meeting I'd love to setup a time to get everyone together and discuss further
21:52:31 <SlickNik> imsplitbit: agreed
21:53:03 <vipul> yea we can find a slot in openstack-trove
21:53:58 <vipul> movin' on then?
21:54:03 <SlickNik> #imsplitbit, SlickNik, vipul and others to discuss the replication and clustering API
21:54:20 <vipul> lol
21:54:23 <SlickNik> lol
21:54:26 <vipul> #action imsplitbit, SlickNik, vipul and others to discuss the replication and clustering API
21:54:27 <SlickNik> trying to action it
21:54:28 <imsplitbit> woops
21:54:29 <SlickNik> thanks!
21:54:30 <imsplitbit> :)
21:54:33 <SlickNik> moving on
21:54:43 <esp> phew!
21:54:56 <SlickNik> #topic Next meeting time
21:55:12 <SlickNik> #link http://doodle.com/fvpxvyxhmc69w6s9erhvvpt4/admin#table
21:55:13 <kevinconway> didn't we spend the first half-hour on this topic?
21:55:23 <SlickNik> ^^ Vote soon. Poll closes end of week.
21:55:29 <SlickNik> yes we already covered it.
21:55:46 <SlickNik> #topic reddwarf -> trove move.
21:56:23 <SlickNik> So we've changed our repos already.
21:56:53 <vipul> what's the status btw? just code renames need to happen?
21:57:06 <vipul> hub_cap: ^^
21:57:11 <SlickNik> hub_cap was working on changing our codebase so that any references to reddwarf are now trove.
21:57:44 <SlickNik> I hope he hasn't lost his electricity again. :(
21:57:58 <hub_cap> im here sry
21:58:05 <hub_cap> im also talking in #openstack-meeting
21:58:13 <SlickNik> ah, okay.
21:58:14 <hub_cap> CUZ THEY ARE DURING THE SAME TIME!!!!
21:58:18 <hub_cap> skip and come back
21:58:23 <SlickNik> okay.
21:58:31 <SlickNik> move this to open discussion.
21:58:39 <SlickNik> #topic API validation.
21:58:51 <SlickNik> juice, any updates for us?
21:59:08 <juice> removed all the validation code
21:59:15 <juice> tox passes
21:59:24 <juice> running int tests as we speak
21:59:29 <juice> review should land today
22:00:00 <juice> I unlocked the jsonschema achievement...
22:00:10 <SlickNik> okay, glad that you were able to figure out the jsonschema bits.
22:00:10 <juice> which gets you...
22:00:15 <SlickNik> thanks for that!
22:00:32 <SlickNik> +100 trove gratitude. :)
22:00:40 <juice> with the size of some of the schemas
22:00:51 <juice> I am not sure if the codebase actually grew or shrank
22:01:03 <SlickNik> heh...
22:01:09 <SlickNik> …and that brings us to...
22:01:15 <SlickNik> #topic open discussion
22:01:19 <juice> not going to do it on this pass but would like in the near future to modularize the schemas
22:01:32 <vipul> nice work juice
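(For anyone who hasn't seen the change, a minimal sketch of the jsonschema pattern it uses; the schema below is illustrative, not the actual Trove schema:)

    import jsonschema

    # illustrative instance-create schema, not the real one
    instance_create = {
        "type": "object",
        "required": ["instance"],
        "properties": {
            "instance": {
                "type": "object",
                "required": ["name", "flavorRef"],
                "properties": {
                    "name": {"type": "string", "minLength": 1},
                    "flavorRef": {"type": "string"},
                },
            },
        },
    }

    body = {"instance": {"name": "db1", "flavorRef": "7"}}
    jsonschema.validate(body, instance_create)  # raises ValidationError on bad input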
22:01:52 <SlickNik> Anything else to discuss?
22:02:09 <SlickNik> (other than the status of the rename.)
22:02:56 <SlickNik> ...
22:03:03 <SlickNik> I guess not.
22:03:21 <SlickNik> hub_cap take it away.
22:03:27 <juice> I would like to discuss my fresh feelings about the api stuff - as in when can we correct them to make them more restful
22:03:33 <hub_cap> HEYO
22:03:43 <juice> but I don't know if it's urgent
22:03:52 <juice> just planting the seed right now
22:03:52 <esp> juice: nooo!!
22:04:02 <vipul> we need to think of doing that for v2 api...
22:04:08 <vipul> which may be w/clustering?
22:04:12 <juice> we should do this when we move away from wsgi
22:04:33 <hub_cap> ok so should i talk status of rename real quick?
22:04:50 <kevinconway> move away from wsgi? as in the wsgi interface or the wsgi module from openstack common?
22:05:05 <juice> at least the latter
22:05:19 <SlickNik> Go for it hub_cap.
22:05:30 <hub_cap> ok so troveclient is renamed, it failed jenkins for some reason, ill look into it
22:05:37 <hub_cap> we might have to merge it so i can make progress w/ reddwarf
22:05:44 <hub_cap> but i anticipate ~24hrs itll be done
22:05:50 <hub_cap> i officially have 2 working receptacles now
22:05:52 <hub_cap> so ill be good to go
22:05:52 <vipul> speaking of which --- can we release to pypi afterwards?
22:06:03 <hub_cap> thats a good idea
22:06:13 <hub_cap> i dont control that, i think grapex does
22:06:21 <grapex> hub_cap: Not so
22:06:25 <vipul> IIRC it was based on a tag
22:06:30 <grapex> I do have access to the repo, but so does mordred
22:06:43 <hub_cap> ok ill ask mordred to push it
22:06:45 <grapex> vipul: That was it.
22:06:46 <SlickNik> grapex: did we build it around the ci-infra tagging?
22:06:57 <hub_cap> if the unit tests work, is everyone ok w/ me pushing the code for the client rename?
22:07:02 <grapex> SlickNik: I'm not sure
22:07:19 <vipul> yea that's what it was... push a tag up to gerrit, and it will push to pypi
22:07:23 <SlickNik> grapex: okay, I can look into that.
22:07:52 <vipul> hub_cap: yea, if things worky let's push it
22:07:58 <SlickNik> #action SlickNik look into publishing to pypi based on tags.
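(The tag-driven release flow vipul describes amounts to something like this; the version number is made up:)

    git tag -s 0.1.0 -m "python-troveclient 0.1.0"
    git push gerrit 0.1.0
    # jenkins reacts to the tag, builds the sdist, and uploads it to pypi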
22:08:12 <grapex> On that note
22:08:19 <grapex> Now that we've got more ci-infra support
22:08:23 <grapex> can we look at generating docs?
22:08:40 <grapex> The client actually had some, though they worked as tests which turned out to not be a great idea
22:08:42 <SlickNik> hub_cap: I'm fine with that
22:09:14 <grapex> We could change the docs to not run as PyTests and generate them though - has anyone heard about how pushing docs to PyPI works with the new CI infra stuff?
22:09:39 <grapex> hub_cap: Go nuts... although maybe we should check it in after we have a pull request ready to change the tests.
22:10:27 <SlickNik> grapex: I'm not sure about the docs. Someone will need to look into it.
22:11:17 <SlickNik> And that was the status.
22:11:22 <SlickNik> Anything else?
22:11:44 <SlickNik> I think we might be all done with the meeting, otherwise...
22:12:08 <SlickNik> going once.
22:12:12 <mordred> what?
22:12:36 <SlickNik> mordred, did you have something you wanted to bring up?
22:12:46 * mordred just saw my name get pinged
22:13:17 <SlickNik> oh, it came up in the context of publishing python-troveclient to pypi...
22:13:27 <SlickNik> and you having the creds to do so.
22:13:35 <mordred> cool. so that should work just by pushing a tag to gerrit
22:13:45 <SlickNik> yes, I was going to check on that.
22:13:58 <SlickNik> And update the ci-infra scripts if that's not in place already.
22:14:09 <mordred> I believe we should be up to date
22:14:15 <SlickNik> okay cool!
22:14:19 <mordred> main thing to check is python-troveclient itself on pypi
22:14:42 <mordred> which corvus setup, so it likely has openstackci added to it properly
22:14:47 <mordred> so should work!
22:15:09 <SlickNik> awesome.
22:15:19 <hub_cap> grapex: change the tests?
22:15:58 <grapex> hub_cap: Yeah, we should just rip the DocTest stuff out and leave the docs
22:16:13 <grapex> They were pretty useful to a few ops at Rackspace, though there's close to nothing in them.
22:16:25 <hub_cap> ah
22:16:38 <SlickNik> Sweet. I think we're done.
22:16:42 <SlickNik> Thanks everyone!
22:16:48 <SlickNik> #endmeeting