18:00:25 <SlickNik> #startmeeting trove-bp-meeting
18:00:26 <openstack> Meeting started Mon Oct 27 18:00:25 2014 UTC and is due to finish in 60 minutes.  The chair is SlickNik. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:27 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:29 <openstack> The meeting name has been set to 'trove_bp_meeting'
18:00:35 <dougshelley66> o/
18:00:41 <amrith> ./
18:00:54 <georgelorch> o/
18:00:57 <SlickNik> Agenda at:
18:00:59 <SlickNik> #link https://wiki.openstack.org/wiki/Meetings/TroveBPMeeting
18:01:02 <grapex> o/
18:01:05 <amcrn> o/
18:02:51 <SlickNik> Looks like we have a full agenda this morning, so let's get started.
18:02:58 <SlickNik> #topic Add RAM, Cores, and Volume Count to Quotas
18:03:13 <SlickNik> #link https://review.openstack.org/#/c/129734/
18:05:44 <SlickNik> amcrn: Anything to add?
18:06:03 <amcrn> SlickNik: not really, the crux of it is in the blueprint, and it's fairly to the point as-is.
18:06:34 <amrith> amcrn, would you add a section on how you'd expect people to set these values
18:06:46 <amcrn> amrith: not sure i follow?
18:06:46 <amrith> or since I assume you are using this already, provide some examples from your usage.
18:06:51 <SlickNik> amcrn: I had a question, but I can add it to spec. It's regarding whether the API call will expose these new limits?
18:07:24 <SlickNik> Looks fairly straightforward.
18:07:25 <amcrn> SlickNik: oh, good point. it does, i'll make sure to amend the spec to include that. thanks!
18:08:03 <SlickNik> Any other question on this?
18:08:03 <iccha2> is the maximum number of volumes just the number of volumes, or is it in terms of MB as well?
18:08:21 <amcrn> iccha2: the size check already exists as a quota policy, this just checks the # of volumes.
18:08:46 <iccha2> ok makes sense
18:10:06 <amrith> amcrn, would you be able to add sample numbers
18:10:14 <amrith> to the questions I had in the review
18:10:27 <amrith> I'm not entirely sure how to set these numbers other than the defaults (which are basically wide open).
18:10:45 <amrith> I'm assuming that they have to be less than the corresponding Nova limits
18:10:48 <amcrn> amrith: it's up to the deployer. cores could be set equal to the # of instances, or whatever.
18:10:51 <amrith> but the question is, by how much?
18:10:58 <amcrn> depends on what type of flavors you support
18:11:16 <amrith> ok, so the implementation is basically a further protection
18:11:24 <amrith> over and above what Nova and other projects already support.
18:11:26 <amrith> sounds good.
18:11:27 <amrith> thx
18:11:29 <amcrn> np
18:11:50 <johnma> this is related to what SlickNik just mentioned about the api calls but I am assuming we should be able to update these values through the api as well
18:12:27 <amcrn> johnma: if the current code review doesn't allow that, and it works for the others, i'll make sure to fix that. so yes, the new quotas should be discoverable via quota-show, and should be able to be updated via mgmt-quota-update.
18:12:47 <amcrn> good point, thanks
18:13:02 <johnma> sure, sounds good, thanks
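[Editor's note: a minimal sketch of the layered quota check discussed above, where Trove validates per-tenant resource deltas (instances, cores, RAM, volume count) against its own limits before the request reaches Nova. The names `check_quota`, `QuotaExceeded`, and the dict shapes are illustrative, not the actual Trove implementation; defaults are "wide open" (unlimited) unless a deployer overrides them, matching the discussion.]

```python
class QuotaExceeded(Exception):
    """Raised when a requested delta would push usage past a limit."""


# A limit of -1 means unlimited; these defaults mirror the "wide open"
# behavior amrith mentions. (Hypothetical names, for illustration only.)
DEFAULT_LIMITS = {"instances": -1, "cores": -1, "ram": -1, "volumes": -1}


def check_quota(usage, requested, limits=None):
    """Raise QuotaExceeded if usage + requested exceeds any limit.

    usage and requested map resource name -> count; limits overrides
    DEFAULT_LIMITS per-tenant, e.g. {"cores": 8}.
    """
    effective = {**DEFAULT_LIMITS, **(limits or {})}
    for resource, delta in requested.items():
        limit = effective.get(resource, -1)
        current = usage.get(resource, 0)
        if limit >= 0 and current + delta > limit:
            raise QuotaExceeded(
                "%s quota exceeded: %d + %d > %d"
                % (resource, current, delta, limit))
```

As amcrn notes, this is protection over and above the Nova-side limits: the deployer picks values consistent with the flavors they offer (e.g. cores could equal the instance quota times the largest flavor's vCPU count).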
18:13:30 <SlickNik> Okay, sounds good. Let's move on if there are no further questions.
18:13:44 <SlickNik> .
18:13:48 <amrith> what're we doing
18:13:51 <amrith> with respect to process
18:13:54 <amrith> just +1 the review?
18:14:20 <SlickNik> amrith: We're going to let folks look at the spec and +1/+2 it offline.
18:14:25 <amrith> thx
18:15:06 <amcrn> #topic Make Rsync For Guest Optional Spec
18:15:13 <amcrn> #link https://review.openstack.org/#/c/129740/
18:15:17 <SlickNik> irc://zncserver.dyndns.org:5000/#topic Make Rsync For Guest Optional
18:15:23 <SlickNik> whoops
18:15:37 <SlickNik> thanks amcrn
18:15:41 <amcrn> :)
18:16:42 <amcrn> this change is mostly symbolic. helps new deployers understand how a production image could be built.
18:18:05 * grapex awards amcrn 5 niceness points
18:18:08 <SlickNik> amcrn: +1. We've recently got a lot of questions similar to - "But how does the guest code make it to the instance"
18:18:16 <grapex> That's a pretty good idea since there seems to be so much confusion on that topic
18:18:26 <amcrn> grapex: can i get a gold star to put on the board as well?
18:18:53 <grapex> amcrn: No
18:18:56 * grapex has limits
18:18:57 <amcrn> :/
18:19:15 <johnma> so you mention that it requires setting GUEST_LOCAL_TROVE_DIR and GUEST_LOCAL_TROVE_CONF. is this to be set in the diskimage-builder element, and if so, is there a default value or something like that?
18:19:40 <amcrn> johnma: well, if you're using redstack, you can set it in redstack.rc (like so https://review.openstack.org/#/c/119488/5/scripts/redstack.rc)
18:19:51 <amcrn> but otherwise, yes, you'd have to set it before the diskimage-builder create call
18:20:02 <amcrn> (see https://review.openstack.org/#/c/119488/5/scripts/functions_qemu)
18:21:04 <johnma> aah ok, I should have gone through the work item. thanks amcrn
18:21:12 <amcrn> johnma: no worries, good question.
18:22:17 <SlickNik> So if the files exist on the image, does the rsync still happen and fail (or do we need to change it to stop the rsync from occurring in that case?)
18:22:23 <johnma> will we document this somewhere?
18:22:29 <amcrn> SlickNik: the rsync does not happen anymore
18:22:52 <SlickNik> Okay — sounds good.
18:23:05 <SlickNik> Looks good to me. Thanks amcrn
18:23:06 <amcrn> SlickNik: mostly because https://github.com/openstack/trove-integration/blob/master/scripts/files/trove-guest.upstart.conf#L20 is satisfied
18:23:21 <SlickNik> Ah, that explains it
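[Editor's note: a hedged sketch of the "rsync only when needed" behavior described above: if the agent code and config were baked into the image at build time (via the GUEST_LOCAL_TROVE_DIR / GUEST_LOCAL_TROVE_CONF settings), the sync step is skipped, which is what satisfies the condition in trove-guest.upstart.conf. The function name and paths are illustrative, not Trove's actual guest-boot code.]

```python
import os


def should_sync_guest(code_dir, conf_file):
    """Return True only if the guest code or config is missing on the image.

    When a production image is built with both baked in, this returns
    False and the rsync step is skipped entirely.
    """
    return not (os.path.isdir(code_dir) and os.path.isfile(conf_file))
```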
18:24:29 <SlickNik> Okay, let's move on.
18:24:38 <SlickNik> #topic Enhance Mgmt-Show To Support Deleted Instances
18:24:49 <SlickNik> #link https://review.openstack.org/#/c/129752/
18:27:29 <grapex> I was honestly surprised the other day when someone told me this wasn't possible
18:27:34 <grapex> I could've sworn it used to be
18:28:11 <SlickNik> Yeah, I'm surprised that this is broken too.
18:28:14 <iccha2> why do we need the query parameter? why cant we just show the instance irrespective of whether it is deleted or not, since its just a show call?
18:28:18 <SlickNik> Do we even need an extra parameter (i.e. deleted=true) on the GET call?
18:28:31 <amcrn> i added the extra parameter on the GET to be backward compatible
18:28:32 <iccha2> haha same point SlickNik
18:28:35 <grapex> iccha2: Good point
18:28:44 <grapex> amrcn: Hmmm...
18:29:00 <amcrn> it's quite possible someone has tooling that relies on an error being thrown if the instances is deleted
18:29:04 <grapex> I guess some people might rely on getting an error for a deleted instance
18:29:05 <amcrn> instance*
18:29:13 <grapex> amcrn: makes sense
18:29:19 <iccha2> hmm ya i can see that.
18:29:55 <grapex> Remember kids, don't let bugs get into your API. Otherwise you end up having to support them forever as legacy behavior. :p
18:30:12 * amcrn hums the G.I. Joe tune
18:30:19 <SlickNik> fair point.
18:30:20 <grapex> Too bad we never had a test that showed a deleted instance.
18:30:57 <SlickNik> Well, we could have just as easily had one that did, and expected an error.
18:31:01 <SlickNik> So no help there.
18:31:29 <grapex> SlickNik: You're right, it depends on however the person who wrote the test felt that day
18:32:25 <edmondk> As one point about query parameters, the GET should always show everything and the query parameter should just do filtering on the GET
18:33:06 <edmondk> not sure if that's how trove always uses query parameters just my two cents
18:33:17 <amcrn> edmondk: this would definitely be interesting to bring up to the newly formed api committee, because there's quite a few openstack apis that rely on the "GET only shows active unless --deleted true is provided"
18:33:41 <amcrn> not sure the best practices on it
18:34:49 <amcrn> edmondk: fwiw, the current pattern is never show deleted unless there's a parameter explicitly saying to. it's fairly consistent across nova + cinder + glance + etc.
18:35:55 <edmondk> amcrn: That works as well as long as the API is consistent
18:36:02 <SlickNik> Yeah, FWIW there's also the "all-tenants" query param in OpenStack APIs (for admins) that doesn't follow the same pattern
18:36:30 <amcrn> SlickNik: all-tenants is frustrating, because you usually want to target a specific one, and you can't
18:36:46 <amcrn> but that's another point altogether
18:36:54 <SlickNik> Yeah.
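[Editor's note: a minimal sketch of the backward-compatible lookup being discussed: by default a deleted instance still raises an error (so existing tooling that relies on that behavior keeps working), and only an explicit deleted=true query parameter widens the search. `InstanceNotFound`, `mgmt_show`, and the record shape are illustrative, not Trove's actual mgmt API code.]

```python
class InstanceNotFound(Exception):
    """Raised when no matching (visible) instance exists."""


def mgmt_show(instances, instance_id, include_deleted=False):
    """Look up an instance record, hiding deleted ones by default.

    instances maps id -> record dicts with a boolean "deleted" field;
    include_deleted corresponds to the ?deleted=true query parameter.
    """
    inst = instances.get(instance_id)
    if inst is None or (inst["deleted"] and not include_deleted):
        raise InstanceNotFound(instance_id)
    return inst
```

This mirrors the pattern amcrn describes across nova, cinder, and glance: never show deleted resources unless a parameter explicitly asks for them.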
18:37:03 <SlickNik> Okay, any other questions regarding this one?
18:37:21 <SlickNik> . Let's move on
18:37:28 <SlickNik> #topic Guest Agent RPC Ping Pong Mgmt API
18:37:36 <SlickNik> #link https://review.openstack.org/#/c/130392/
18:42:36 <SlickNik> amcrn: What's the use case for this?
18:42:54 <amcrn> SlickNik: this has been very handy when triaging rabbitmq issues
18:43:03 <amcrn> client hangups (zombie connections), etc.
18:44:03 <amcrn> in short, if you look at your guestagent.<uuid> topics for non-draining, then rpc_ping, then mgmt-show, it provides a holistic picture as to what's going on.
18:44:39 <edmondk> Main use case then is for monitoring/heart beat then?
18:45:02 <amcrn> i'd say its purpose is the same as mgmt-show, except far more limited
18:45:55 <amcrn> mgmt-show isn't useful in many scenarios, because you can't distinguish between the volume disappearing or the guestagent not being reachable via rpc (since get_filesystem_info is what returns the volume information)
18:46:17 <SlickNik> My one concern here would be that we're adding a mgmt API for a use-case that's very developer / debugging centric.
18:47:29 <SlickNik> But one can argue that that's what part of the mgmt APIs should be for.
18:48:44 <amcrn> SlickNik: yeah, it's definitely a bit grey, but i couldn't think of an alternative for operators.
18:50:14 <SlickNik> edmondk: I wouldn't think you'd want to build a monitoring solution on top of this — heartbeats would probably be a better alternative for that.
18:51:06 <SlickNik> edmondk: Seems to me more of a tool / method to detect and diagnose amqp issues.
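[Editor's note: an illustrative sketch of why a no-op rpc_ping helps triage: it isolates "is the guest reachable over the message bus at all?" from "did a specific data call (like get_filesystem_info) fail?". The thread/queue timeout machinery here is stand-in code, not Trove's oslo.messaging client.]

```python
import queue
import threading


def rpc_ping(send_request, timeout=5.0):
    """Return True if the guest answers a no-op ping within timeout.

    send_request is any callable that performs the round-trip (e.g. a
    stand-in for an RPC call on the guestagent.<uuid> topic); a zombie
    connection simply never answers, so the get() times out.
    """
    reply = queue.Queue()
    threading.Thread(
        target=lambda: reply.put(send_request("ping")),
        daemon=True,
    ).start()
    try:
        return reply.get(timeout=timeout) == "pong"
    except queue.Empty:
        return False
```

Combined with mgmt-show and a check of the guestagent topic for draining, this gives the holistic picture amcrn describes when debugging rabbitmq client hangups.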
18:51:52 <SlickNik> amcrn: Other than that, looks fairly straightforward.
18:52:12 <amcrn> SlickNik: cool, appreciate the feedback
18:52:27 <SlickNik> Anyone else have any more questions regarding this?
18:53:13 <SlickNik> That's all we have this morning.
18:53:18 <SlickNik> #endmeeting