18:02:03 <SlickNik> #startmeeting trove-bp-review
18:02:04 <openstack> Meeting started Mon Sep 22 18:02:03 2014 UTC and is due to finish in 60 minutes.  The chair is SlickNik. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:05 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:08 <openstack> The meeting name has been set to 'trove_bp_review'
18:02:21 <SlickNik> Just giving folks a couple of minutes to trickle in.
18:02:23 <zigo> denis_makogon: Ok, cool. I'll wait for your patch then.
18:02:25 <amrith> ./
18:02:31 <denis_makogon> o/
18:02:53 <zhiyan> SlickNik: sorry again, last week I didn't attend the meeting.
18:03:06 <esmute> o/
18:03:10 <SlickNik> zhiyan: no worries — you're here this week.
18:03:14 <zhiyan> o/
18:03:18 <annashen> o/
18:03:21 <grapex> o/
18:03:22 <SlickNik> Agenda at:
18:03:23 <schang> o/
18:03:25 <SlickNik> #link https://wiki.openstack.org/wiki/Meetings/TroveBPMeeting
18:05:11 <SlickNik> #topic Cassandra Clustering
18:05:15 <SlickNik> #link https://blueprints.launchpad.net/trove/+spec/cassandra-cluster
18:05:57 <denis_makogon> i wrote the spec in trove-specs as requested, and added details (modeled on the mongodb clustering spec)
18:06:53 <dougshelley66> o/
18:07:13 <peterstac> o/
18:08:36 <SlickNik> denis_makogon: It looks to me like the spec is still pretty rough around the edges — it seems very similar to the Mongo spec (referring to shards — which I don't think exist in Cassandra).
18:09:31 <denis_makogon> SlickNik, please leave comments where I've written about sharding in C*
18:10:13 <SlickNik> Okay will do.
18:10:31 <amrith> SlickNik, I've already annotated that particular point.
18:11:03 <amrith> denis_makogon, is this intended to be Cassandra clustering (v1) or Cassandra Clustering (everything)
18:11:16 <SlickNik> denis_makogon: I had a couple of comments from last week — "Are we planning on doing replication? How are Snitches handled?"
18:11:24 <denis_makogon> amrith, with respect, most of the comments were made because the links with info weren't reviewed
18:11:41 <amrith> I don't follow the comment
18:12:54 <denis_makogon> SlickNik, you can't set up replication separately for Cassandra, it's done by default; you can only use the clustering option. The default snitching mechanism is enabled in cassandra out of the box, see the snitch links in the spec
18:13:50 <denis_makogon> amrith, most of your comments on the spec were made because you didn't look at the links that I provided for each section
18:14:19 <amrith> actually I did.
18:15:11 <SlickNik> hrm: denis_makogon — the replicas in the cassandra cluster use a replica placement strategy, and afaik that needs to be configured.
18:15:39 <amrith> SlickNik, you are correct.
18:15:46 <denis_makogon> SlickNik, it depends on how many instances are being deployed
18:16:08 <amrith> I think they let you choose not just strategies but also factors.
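[Editor's note: for context on the replica placement discussion above — in Cassandra, the placement strategy and per-datacenter replication factors are set when a keyspace is created. The sketch below builds such a CQL statement in Python; the keyspace name `trove_ks` and datacenter name `dc1` are illustrative assumptions, not anything from the meeting.]

```python
# Sketch: Cassandra replication is configured per keyspace, not cluster-wide.
# Both the strategy class and the per-datacenter factors are operator choices.

def create_keyspace_cql(name, strategy="NetworkTopologyStrategy", **dc_factors):
    """Build a CREATE KEYSPACE statement with an explicit placement strategy."""
    opts = {"class": strategy}
    opts.update({dc: str(rf) for dc, rf in dc_factors.items()})
    body = ", ".join("'%s': '%s'" % (k, v) for k, v in opts.items())
    return "CREATE KEYSPACE %s WITH replication = {%s};" % (name, body)

# e.g. 3 replicas in a (hypothetical) datacenter called "dc1":
stmt = create_keyspace_cql("trove_ks", dc1=3)
```

This is the kind of knob amrith refers to: both the strategy ("SimpleStrategy" vs "NetworkTopologyStrategy") and the factors are configurable.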
18:17:13 <SlickNik> Anyhow denis_makogon: I think we need to put a lot more thought and work into this spec — so this is still heavily work in progress, correct?
18:17:45 <SlickNik> I will put some of my comments into the spec rst as well.
18:17:52 <denis_makogon> sure
18:18:39 <SlickNik> Okay. Thanks!
18:18:47 <SlickNik> zhiyan: around?
18:18:56 <zhiyan> SlickNik: yes
18:19:16 <SlickNik> #topic OSProfiler integration
18:19:22 <SlickNik> #link https://review.openstack.org/#/c/103825/
18:19:49 <zhiyan> first of first, OSProfiler is an OpenStack cross-project profiling library: https://github.com/stackforge/osprofiler
18:20:24 <zhiyan> boris-42 prepared the common BP in oslo-specs, since the integration change is similar for all projects.
18:20:40 <zhiyan> Currently there are 2 other projects using osprofiler: Glance & Cinder, and some others are in progress, e.g. the one for heat https://review.openstack.org/#/c/118115/
18:21:21 <zhiyan> there's a doc introducing the heat one http://ahsalkeld.wordpress.com/2014/09/04/how-to-profile-heat-using-osprofile/ , fyi
18:22:51 <SlickNik> zhiyan: looks like good stuff.
18:22:53 <zhiyan> I'd really like to see Trove have this capability as well, so a few weeks ago I prepared two changes for trove and troveclient https://review.openstack.org/#/c/116653/ , https://review.openstack.org/#/c/116654/
18:23:32 <SlickNik> zhiyan: I'm a bit concerned about why the spec hasn't merged in oslo yet — can you speak to that?
18:23:50 <zhiyan> SlickNik: absolutely, I'm sure it is.
18:24:37 <zhiyan> SlickNik: most people are on the same page now, but a few folks are just worried about the program itself, afaik
18:25:25 <zhiyan> SlickNik: I haven't kept an eye on the comments to stay up-to-date.
18:25:37 <SlickNik> zhiyan: Yes, reading through the comments it looks like most folks are concerned about whether oslo is the right spot for cross-project specs.
18:26:15 <zhiyan> SlickNik: =) . There's a clear result page as an example, http://boris-42.github.io/ngk.html
18:27:06 <zhiyan> with my above two changes, Trove could provide it to operators/developers as well =)
18:27:35 <zhiyan> and we could add more trace points as needed in the near future. I can do that as well if needed.
18:28:21 <amrith> SlickNik, I have some questions, ... will wait ...
18:28:31 <SlickNik> amrith: go for it.
18:29:15 <amrith> thx, you addressed part of my first question re: oslo. The other half is this, how do we get 'changes' from osprofiler, is it just the dependency? or is there an oslo style code copying thing that we have to do from time to time?
18:29:42 <zhiyan> amrith: dependency
18:30:09 <zhiyan> amrith: https://github.com/openstack/requirements/blob/master/global-requirements.txt#L64
18:30:18 <amrith> ok., what other projects do we become dependent upon? for example, the spec says that data is written into ceilometer. is this the only dependency?
18:31:04 <zhiyan> amrith: good question. osprofiler supports different 'backends' to store trace data.
18:31:20 <amrith> please elaborate.
18:31:39 <zhiyan> now there's (only) one built-in implementation, it's ceilometer..
18:32:01 <zhiyan> https://github.com/stackforge/osprofiler/tree/master/osprofiler/_notifiers
18:32:42 <zhiyan> and https://github.com/stackforge/osprofiler/tree/master/osprofiler/parsers (for result showing stuff)
18:33:05 <amrith> zhiyan, maybe I should restate my question.
18:33:20 <amrith> the links you shared don't establish a dependency on ceilometer for the trove project
18:33:35 <amrith> i.e. if I don't want to enable osprofiler, I'm not obligated to run ceilometer
18:33:47 <amrith> however, if I want to use profiler, I must use ceilometer (now), is that correct?
18:33:59 <zhiyan> amrith: so for the question: now ceilometer is a 'soft' dependency if we enabled profiling in Trove. (and trigger it)
18:34:05 <denis_makogon> amrith, correct, you just need to have a queue for traces
18:34:06 <denis_makogon> that's all
18:34:21 <amrith> zhiyan, is there a dependency on oslo.messaging?
18:34:35 <SlickNik> zhiyan: what do you mean by 'soft dependency'?
18:34:46 <zhiyan> amrith: there's no dependency on oslo.messaging.
18:35:37 <zhiyan> ok, sorry. SlickNik 'soft' means: if we enable profiling in Trove and trigger it with a request, then Trove will send trace data to ceilometer as the 'backend'.
18:36:22 <boris-42> zhiyan hi there
18:36:25 <zhiyan> since currently osprofiler can only drive the ceilometer backend
18:36:34 <zhiyan> boris-42: hey =)
18:36:55 <amrith> zhiyan, what is the overhead of this code, when it is disabled? I see that it instruments every API call with a call to osprofiler.profiler.get().
18:36:59 <boris-42> zhiyan btw as far as I know there will be support for stacktach as well
18:37:05 <boris-42> amrith there is no overhead
18:37:07 <amrith> sorry, will wait till boris-42 and you address the previous question
18:37:27 <zhiyan> boris-42: good to know. yes, there's no overhead if we disable it
18:37:30 <amrith> I'll buy "little overhead" but "NO overhead" seems hard to comprehend.
18:37:32 <boris-42> amrith so basically there is no overhead even if profiler is turned on (but not triggered)
18:37:52 <boris-42> amrith so every call will be something like "if not None"
18:38:01 <boris-42> amrith in my world it's no overhead
18:38:29 <amrith> considering this is a profiler, I assumed that Heisenberg would be remembered ;)
18:38:49 <amrith> anyhow, there was a mention of tracepoints and you mention "triggering", how does that work?
18:38:58 <amrith> is there something more that one needs to do to utilize this?
18:39:07 <amrith> which brings up the question of how to document this profiler for trove
18:39:11 <boris-42> amrith so
18:39:14 <amrith> which is a service unlike others
18:39:19 <boris-42> amrith so
18:39:25 <boris-42> amrith please read this
18:39:25 <amrith> we have a set of 'services' on the control side and one on the guestagent side.
18:39:37 <amrith> <waiting>
18:39:45 <boris-42> amrith https://github.com/stackforge/osprofiler/blob/master/README.rst
18:39:49 <amrith> <thx>
18:39:55 <boris-42> amrith I am not sure that I will be able to describe it better than here
18:39:59 <boris-42> amrith especially in IRC
18:40:08 <amrith> np, I will read this link. thanks!
18:40:09 <boris-42> amrith if you have any questions about this README just ping me
18:40:14 <amrith> wilco
18:40:17 <boris-42> amrith great
18:40:28 <boris-42> amrith basically triggering happens when you run "profiler.init()"
18:40:29 <amrith> SlickNik, I'm done. I have homework
18:40:40 <boris-42> amrith this happens inside wsgi middleware
18:40:52 <zhiyan> for http calls
18:40:59 <boris-42> amrith https://github.com/stackforge/osprofiler/blob/master/osprofiler/web.py#L108
18:41:19 <boris-42> amrith if there is a special header, signed by the key that is specified in api-paste.ini
18:41:28 <boris-42> amrith then profiler.init() will happen
18:41:40 <boris-42> amrith and profiler is triggered and it will create overhead=)
18:41:49 <boris-42> amrith otherwise no overhead
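[Editor's note: the "signed by a key" trigger boris-42 describes works roughly as below. This is a simplified sketch of HMAC trigger verification, not osprofiler's exact wire format; the key value is a placeholder.]

```python
# Sketch: profiling is triggered per-request by a header whose value is
# HMAC-signed with a shared key (in osprofiler's case, the key configured in
# api-paste.ini). Middleware calls its init() only when the signature matches.
import hashlib
import hmac

HMAC_KEY = b"secret-from-api-paste-ini"  # illustrative placeholder

def sign(trace_id):
    """What a trusted client (e.g. an admin CLI) attaches to the request."""
    return hmac.new(HMAC_KEY, trace_id.encode(), hashlib.sha256).hexdigest()

def should_trigger(trace_id, signature):
    """Server side: trigger profiling only for a correctly signed trace id."""
    return hmac.compare_digest(sign(trace_id), signature)

good = should_trigger("req-123", sign("req-123"))  # valid signature: trigger
bad = should_trigger("req-123", "deadbeef")        # unsigned request: rejected
```

This is what makes it "production ready": random users can't turn profiling (and its overhead) on, only holders of the key can.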
18:42:13 <SlickNik> Sounds good just a couple of implementation related clarifications: So the part that's actually sending the trace data over is the osprofiler dependency? And the ceilometer endpoint is configured there?
18:42:14 <SlickNik> So is all that's exposed in trove a couple of switches to turn profiling on / off?
18:42:46 <boris-42> SlickNik hm about "backend" for osprofiler
18:43:00 <boris-42> SlickNik it can be everything, but we decided to use ceilometer as a first implementation
18:43:10 <boris-42> SlickNik cause it's simplest way to get this working in gates
18:43:16 <boris-42> SlickNik and by devstack
18:43:38 <boris-42> SlickNik about turning off / on you can turn it off by a lot of ways
18:43:52 <zhiyan> btw, I prepared a change for devstack as well, to help setup things for Trove osprofiler.
18:44:05 <SlickNik> zhiyan, link please?
18:44:06 <boris-42> SlickNik you can do it from code, from api-paste.ini
18:44:23 <zhiyan> in the change #116653, I added an option for Trove.
18:44:54 <zhiyan> SlickNik: https://review.openstack.org/#/c/116671/
18:44:57 <SlickNik> boris-42: Okay, got it.
18:45:18 <zhiyan> SlickNik: actually i mentioned all related patches in #116653 commit msg =)
18:45:20 <boris-42> SlickNik so actually the idea is to have production ready profiler
18:45:42 <boris-42> SlickNik e.g. if something goes wrong in production, you are able (as an admin) to trigger it and analyze what's slow
18:45:47 <SlickNik> So definitely think this is something useful.
18:46:00 <boris-42> SlickNik ya and it's cross service project
18:46:16 <boris-42> SlickNik so you can get something like http://boris-42.github.io/ngk.html
18:46:20 <boris-42> SlickNik nova glance keystone
18:46:50 <SlickNik> I'm not too excited about the soft dependency on a single backend, but given that this is something that you guys are working on, I'm okay with it.
18:46:55 <zhiyan> SlickNik: yep, we will integrate more project, to give operator a 'whole' trace picture
18:47:23 <SlickNik> Other cores around who would like to weigh in?
18:47:24 <zhiyan> SlickNik: agreed, we'd like to. boris-42 ^
18:47:24 <amrith> one more question ...
18:47:42 <amrith> do we have to implement triggers (in various places in the code) to benefit from this?
18:47:42 <boris-42> SlickNik so yep, at first we will depend only on ceilometer
18:47:43 <zhiyan> I have one question about schedule..will wait.
18:48:04 <zhiyan> amrith: yes
18:48:09 <boris-42> amrith so you can put in any place
18:48:15 <boris-42> amrith that you think is interesting
18:48:19 <boris-42> amrith points
18:48:33 <zhiyan> amrith: #116653 is an initial patch. we can add more trace points in code as needed
18:48:34 <amrith> so this isn't the complete implementation, this is just the beginning of the integration (if you will)
18:48:36 <boris-42> amrith you are not limited in number, because they create overhead only if the profiler is triggered
18:48:50 <boris-42> amrith ya you guys should help us a bit=)
18:48:52 <amrith> over time we will have people adding profile trace points like we have debugging messages
18:48:57 <boris-42> amrith ya
18:49:00 <boris-42> amrith exactly
18:49:05 <amrith> yikes!
18:49:21 <zhiyan> amrith: but for db/sql stuff, osprofiler can trace all of it automatically now.
18:50:01 <amrith> yes but a lot of what we do is executing commands
18:51:01 <amrith> and in my observation, the db stuff is fast
18:51:01 <zhiyan> SlickNik: my minor schedule question is: do you think we still have a chance to get this done in Juno?
18:51:01 <amrith> some of the commands take a lot longer
18:53:00 <boris-42> amrith so basically you can trace whole classes
18:53:00 <boris-42> amrith and so on
18:53:00 <SlickNik> zhiyan: Probably not — right now the juno train has passed. :(
18:53:00 <boris-42> amrith and avoid adding too many points
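[Editor's note: "tracing whole classes" can be sketched as below. osprofiler ships its own class decorator for this; the sketch uses a generic, invented decorator to show the pattern of covering all public methods with one annotation.]

```python
# Sketch: a class decorator that wraps every public method with a tracer,
# so one annotation covers a whole class instead of many per-call points.
import functools

TRACED = []  # stand-in for a real trace backend

def trace_all_methods(cls):
    for attr, value in list(vars(cls).items()):
        if callable(value) and not attr.startswith("_"):
            def make_wrapper(name, func):
                @functools.wraps(func)
                def wrapper(*args, **kwargs):
                    TRACED.append(name)  # record entry into the method
                    return func(*args, **kwargs)
                return wrapper
            setattr(cls, attr, make_wrapper(attr, value))
    return cls

@trace_all_methods
class GuestAgent(object):       # hypothetical class, for illustration only
    def prepare(self):
        return "prepared"
    def restart(self):
        return "restarted"

agent = GuestAgent()
agent.prepare()
agent.restart()                 # TRACED now holds both method names
```

This addresses amrith's "executing commands" point: long-running guestagent operations can be traced wholesale without sprinkling individual points everywhere.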
18:53:00 <georgelorch> oops o/ :)
18:53:01 <SlickNik> zhiyan: Only RC1 bugfixes to the project at this point.
18:53:01 <zhiyan> SlickNik: ok. fine.
18:53:01 <SlickNik> zhiyan: So Kilo-1 is what we're looking at.
18:55:16 <SlickNik> Will approve the BP for K1. Thanks for putting this together zhiyan, and boris-42
18:55:27 <amrith> boris-42, zhiyan ... the sooner there is a non ceilometer backend the more useful it will become (just saying ;))
18:55:32 <zhiyan> SlickNik: thanks!
18:55:48 <zhiyan> amrith: reasonable ;)
18:56:04 <SlickNik> Okay — and we're almost out of time.
18:56:46 <SlickNik> The last BP on the agenda will take more than 5 minutes, so I'd recommend that folks take a look at the spec, and post your comments there.
18:57:01 <SlickNik> So we can hit the ground running next meeting.
18:57:01 <denis_makogon> that'll be awesome
18:57:22 <SlickNik> #link https://blueprints.launchpad.net/trove/+spec/oracle-db-over-fedora-20
18:57:45 <SlickNik> That's all for today, folks.
18:57:48 <SlickNik> #endmeeting