20:02:28 #startmeeting log_wg
20:02:29 Meeting started Wed Oct 7 20:02:28 2015 UTC and is due to finish in 60 minutes. The chair is jokke_. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:02:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:02:33 The meeting name has been set to 'log_wg'
20:03:08 so sorry I'm a bit out of shape, what do we have on the agenda today?
20:03:15 hi
20:03:29 general info: wanted to make the group aware of an osops repository: http://git.openstack.org/cgit/openstack/osops-tools-logging/
20:04:12 I haven't had time to put together an agenda in ages.
20:04:13 #info wanted to make the group aware of an osops repository: #link http://git.openstack.org/cgit/openstack/osops-tools-logging/
20:04:53 me neither, sorry. that's why I thought I'd take topics on the fly if people have something specific they want to bring up
20:05:12 if not I just set it to open
20:05:23 We need a chapter in the admin guide on log configuration
20:05:32 I still haven't talked to Lana about it.
20:05:49 the docs IRC has been real quiet when I'm out there.
20:05:51 or Rockyg, is there something around the osops you wanna bring up other than the link?
20:06:05 Need to send an email
20:06:40 osops meets just before us. They have a launchpad site, etc. They are creating bps and bugs for ops tools
20:07:17 ok
20:07:32 there could be synergy there for some of our stuff
20:07:52 sounds good
20:08:34 in our summit sessions we should let everyone know about the logging repo. Maybe we can get more scripts, etc. in there and find a common thread to tackle for logging
20:09:24 logging repo? you mean oslo.log?
20:09:40 nope. The link above
20:09:52 oh ... tl;dr :P
20:10:03 didn't get to the last word of the link yet :P
20:10:21 more coffee!!!
20:10:39 yeah ...
or a few hours of sleep every now and then ;)
20:11:44 question for both jokke_ and bknudson: Are there debug messages in keystone and glance that should be converted to error, warning, or info?
20:13:07 Rockyg: I don't think in Glance ... took me a cycle to go through those and I have tried to keep my eye out for those in the reviews
20:13:18 might have missed some but nothing major at least
20:13:40 but there is one thing that is seriously starting to drive me crazy
20:14:09 I'm listening....
20:14:13 that is the agreement that every wsgi request gets logged at info, and we do that for healthcheck middleware as well
20:15:00 if you run logging on INFO level we generate over a gig of logs per day even when there is absolutely nothing else happening in the cluster than HAProxy pings
20:15:48 a gig just from glance????
20:15:57 WTF
20:15:59 from both services, API and Registry
20:16:37 and that applies to any other service as well which would be utilizing the oslo.middleware.healthcheck and sitting behind HAProxy
20:17:33 so I try to find time to figure out how to stop that and write a spec to move those logs to DEBUG
20:17:39 so, at least one issue is the oslo.middleware.healthcheck INFO level?
20:18:44 I don't think it's oslo.middleware logging it but rather the wsgi implementations ... or I'm not yet sure if that comes via the context, oslo.log, or what actually logs those requests for us
20:19:23 but every single request gets logged at INFO and that includes healthchecks
20:20:10 ping Doug and ask him. He knows a bunch of that stuff and is usually around oslo. Actually, just ask on the #openstack-oslo IRC channel
20:20:41 That's a bug in my book.
20:20:50 yup ... when I get time to dig into that ... hopefully that will be before the Summit
20:21:10 or if not I'll figure out a session there to bring it up ;)
20:21:46 in keystone we're switching to running in apache, at which point the apache access logs would serve the purpose.
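The healthcheck flood described above could be suppressed without dropping other INFO-level access logs by attaching a logging filter. This is a minimal sketch, not anything agreed in the meeting: the `/healthcheck` path and the `eventlet.wsgi.server` logger name are assumptions, and the real access-log format varies per service.

```python
# Sketch: drop access-log records for load-balancer health probes before
# they reach any handler. Path and logger name are assumptions.
import logging


class HealthcheckFilter(logging.Filter):
    """Suppress log records that look like HAProxy healthcheck requests."""

    def filter(self, record):
        # Returning False drops the record; any message mentioning the
        # healthcheck endpoint is assumed to be a load-balancer probe.
        return "/healthcheck" not in record.getMessage()


# Attach to whichever logger emits the WSGI access lines (assumed name).
logging.getLogger("eventlet.wsgi.server").addFilter(HealthcheckFilter())
```

Whether a filter like this is acceptable depends on the spec discussed above; moving the messages to DEBUG in oslo would make it unnecessary.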
20:22:14 as in, it's not a keystone thing anymore that's logging access, it's apache or whatever the container is.
20:22:16 you know, Jay Pipes has a bug up his arse about wsgi and notifications. You might ping him.....
20:22:42 does jay pipes ever not have a bug up his arse
20:23:24 ummmm
20:23:57 I'm not that familiar with Jay's arse, nor do I want to be
20:24:23 I think they named it wsgi and pronounce it whiskey because you need a shot every time you have to deal with it
20:24:33 tru dat!
20:25:06 it's like we call the glance domain layer system onion layers because it gets you into tears every time you try to cut through it
20:25:32 ah!
20:26:08 another question for logging, guys....
20:26:23 shoot
20:26:26 Lots of talk on lots of lists about api versions
20:26:38 is the version part of the log messages?
20:26:48 at least with wsgi, yes
20:27:11 as the endpoint where the request got sent includes the version
20:27:53 cool
20:29:04 so, if a cloud claims they use glance v2 for everything, will the log message show that glance v1 was used when the border is crossed from nova?
20:29:43 well we know that nova calls only Images API v1
20:30:05 regardless if it's nova itself or someone using the Nova Image API
20:30:30 as Nova does not (yet) have code in place to talk to Images API v2
20:31:26 are there any glance v2 apis that flow through nova and/or cinder that require another glance api call? something like glance v2 -> nova -> glance v1?
20:31:33 so Clouds can expose only v2 for users but Nova will need access to v1 to work
20:31:44 Rockyg: no
20:31:49 Ah. Cool.
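Since the requested endpoint (and thus the API version) appears in the WSGI access log, version usage such as Nova's v1 calls can be tallied straight from the logs. A minimal sketch; the access-log line format, method list, and `/vN/` path convention are assumptions about what a given service actually emits.

```python
# Sketch: count API versions (v1, v2, ...) seen in access-log lines.
# The quoted request-line format is an assumption; adjust per service.
import re
from collections import Counter

VERSION_RE = re.compile(r'"(?:GET|POST|PUT|PATCH|DELETE|HEAD) /(v\d+)[/ ]')


def count_api_versions(lines):
    """Return a Counter mapping API version strings to request counts."""
    counts = Counter()
    for line in lines:
        match = VERSION_RE.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts
```

Run over a glance API log, a nonzero `v1` count would show the under-the-covers Nova traffic even on a "v2-only" cloud.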
20:32:33 so users can be 100% v2 and since there are no external apis for glance v1, it's all under the covers
20:32:43 Glance does not call anything from Nova and not really from Cinder either as the cinder driver is not really in usable shape still
20:33:59 well there is no intended external usage for v1 but it's exposed on some clouds to users as external api
20:34:33 So glance is pretty much the endpoint for all glance stuff, but other projects call glance?
20:35:05 glance Images API or nova image api
20:35:11 I would love a real architectural diagram of flows that isn't in ascii diag :P
20:35:40 I need to read the docs.
20:36:06 so big problem moving Nova to consume Images API v2 is that nova image api (which is the proxy api) depends on certain things in v1 that do not exist in v2
20:36:52 OK. I knew there was an issue, but it's nova<->glance, not user<->glance
20:37:03 mostly
20:37:05 and the issue with cinder?
20:38:16 I really don't know anything else than that Mike had some issues ... my crystal ball was booting at that time so I didn't get the notification before the rampage in the mailing list, which wasn't in any way constructive
20:39:33 Ah. Ok.
20:40:09 And keystone v2/v3, what are those issues bknudson?
20:40:33 Rockyg: I don't think there's any issues with v2/v3 anymore.
20:40:41 jokke_, also, does manila interact with glance that you're aware of?
20:40:51 devstack was changed to use v3 exclusively
20:40:59 and the goal is to deprecate v2
20:40:59 Rockyg: not that I know
20:41:11 bknudson, then it's just how to do the deprecation in defcore vs releases
20:41:31 defcore shouldn't have anything for v2.
20:41:41 considering v3 has been available for many releases
20:42:07 bknudson: that doesn't mean that defcore wouldn't be relying on it :P
20:42:18 bknudson: keystone v2/v3 was at the advice of keystone ptl
20:42:27 yeah. right now, they are talking about requiring both for a bit. But that doesn't make sense, either.
20:42:30 bknudson: that weighed heavily in that decision
20:43:03 if we have v2 in there it's hopefully only the token stuff
20:43:05 Defcore covers juno, kilo, liberty in the next version
20:43:08 getting a token and validating a token
20:43:27 high hogepodge!
20:43:36 oops. Hi!
20:44:19 would a user have any interaction with v2 in juno, I guess the question should be.
20:45:04 we just added the v3 test to devstack and I think juno had some problems with v3
20:45:42 iirc Glance just moved to even support v3 not so long ago :(
20:46:02 what did glance have to do to support v3?
20:46:05 Ah. Then that's the issue.
20:46:07 the cli?
20:47:58 I think this cycle, we need to focus on advertising API changes to API clients. Otherwise, the old ones will never get removed and the new ones will never be fully accepted.
20:48:59 bknudson: I can't remember, I think it was the client
20:49:01 And what the hell does this have to do with logging, except maybe a filter on the gate test logs to see whether deprecated apis are still in use in the testing
20:49:26 Clients seem to be the weak link in all of this.
20:49:38 oslo_log.versionutils has a setting where you can make deprecated use raise an exception rather than log a warning
20:49:50 I just remember that it wasn't all that long ago I saw a patch to unbreak after we had started to utilize the keystone v3 api
20:50:21 Rockyg: you have a good point there, but you started the discussion ;P
20:51:08 Yeah, I know. I'm gonna take it up with Matt Treinish at some point, so it's good info and how to make the logs more useful for this stuff....
20:52:57 So, that oslo_log.versionutils setting is something that should be documented for admins testing an upgrade
20:53:14 ok
20:53:28 more good info...
20:54:02 probably for monitoring, though.
20:54:20 probably
20:55:07 Sorry I'm all over the board today. Started antibiotics yesterday. sinus infection that needs to clear before I board the plane to Tokyo
20:55:26 ouch ...
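The versionutils behaviour mentioned above is, as far as I know, controlled by oslo.log's `fatal_deprecations` option; this is a hedged sketch of the service config an admin might set while testing an upgrade, not a recommendation from the meeting.

```ini
[DEFAULT]
# Assumed oslo.log option: when enabled, code paths that report a
# deprecated feature via oslo_log.versionutils raise an exception
# instead of logging a warning, so deprecated-API use fails loudly
# during upgrade testing. Do not enable in production.
fatal_deprecations = true
```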
no worries
20:56:30 So, I really don't have anything else. Or at least can't remember it. I hope to have a doc with a summary of Ops issues/wishlist for the next meeting. Let you know.
20:57:15 ok last minutes ... anyone else?
20:58:32 ok, thanks all
20:58:35 #endmeeting