16:01:23 #startmeeting Storyboard
16:01:24 Meeting started Mon Jan 19 16:01:23 2015 UTC and is due to finish in 60 minutes. The chair is krotscheck. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:25 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:28 The meeting name has been set to 'storyboard'
16:01:31 o/
16:01:48 o/
16:02:09 So, before I post an agenda, what quorum do we actually need?
16:02:55 We’ve got 3/4 cores here.
16:02:58 That works for me.
16:03:03 Agenda: https://wiki.openstack.org/wiki/StoryBoard#Agenda
16:03:14 #topic Actions from last week
16:03:15 None!
16:03:23 hey
16:03:24 so
16:03:34 Hey!
16:03:36 it seems there's a bunch of people off today?
16:03:39 o/
16:03:51 wasn't aware of the US having a bank holiday, what is it?
16:03:55 rcarrillocruz: There aren’t. ttx is on a plane, that’s the only person we’re missing.
16:04:05 i'm here, but lagging, builders are here doing things to the house
16:04:09 It’s Martin Luther King day in the U.S.
16:04:27 #topic Urgent Items: Truncated OAuth Tokens Work (rcarrillocruz)
16:04:46 ah ok
16:04:56 hi
16:05:03 was in the wrong window...
16:05:06 rcarrillocruz: Any progress on that?
16:05:28 hi
16:06:14 nope, haven't looked at it yet, sorry
16:06:24 Alrighty, postponed to next week.
16:06:26 #topic User Feedback
16:06:29 Any new user feedback?
16:06:53 Honestly, the UI has been pretty stagnant. We’ve got a large backlog of UI issues and I’m still tied up with email :/
16:07:20 my work allocation for our internal storyboard instance has dried up atm so I don't have any new feedback
16:07:28 No worries, CTtpollard
16:07:30 So we’ll move on.
16:07:37 #topic Discussion Topics
16:07:46 Nothing on the agenda, anyone have something they want to discuss?
16:07:49 yeah
16:08:01 The floor is yours.
16:08:03 i wanted to show the deferred queue/replay events
16:08:21 so, rabbitmq has something called exchange-to-exchange binding
16:08:30 i.e., you can bind an exchange to another exchange
16:08:40 yep
16:08:54 so, in order to have a queue that logs all events (a deferred queue for processing) and give websocket clients the ability to replay events
16:09:00 we can do something like this:
16:09:01 http://stackoverflow.com/questions/6148381/rabbitmq-persistent-message-with-topic-exchange
16:09:11 (look at the diagram)
16:09:34 a fanout exchange that we push all events to, and that's bound to a queue that logs ALL events
16:10:00 and to another topic exchange, where we create on-demand queues with the given filters (i want just tasks, with id 1, for example)
16:10:07 what do you think?
16:10:28 if a websocket client triggers 'replay history', we bind the client to the deferred queue
16:10:57 I like it. We already have a topic exchange, however the topic format won’t yet allow us to subscribe to specific events.
16:10:59 and upon binding, all events are pushed to the client. We could filter on the websocket client so that only events meeting a date criterion are shown
16:11:05 All it filters on right now is type.
16:11:25 yeah, that topic format needs to be changed to address the pubsub spec
16:11:43 all good then?
16:12:20 rcarrillocruz: is all the history relayed on request?
16:12:27 Yeah, let’s talk a bit more about that though. We may be able to add some kind of an ack/nack layer to websocket clients so they themselves can manage which events they’ve handled.
16:12:49 it could be a huge amount of data then
16:12:56 NikitaKonovalov: yes, so we need some filtering mechanism, since a 'replay' would contain a param saying 'just replay since day X from time Y'
16:13:46 Rather than thinking of it from a ‘replay history’ perspective, we could think of it from an atomic level and assert that individual messages were received.
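[Editor's note: a minimal sketch of the topology described above, assuming the pika client library. The exchange and queue names are illustrative, not StoryBoard's actual ones, and the channel is any pika-style channel object.]

```python
def declare_topology(channel):
    """Fanout exchange feeding both a log-everything queue and a topic
    exchange, via an exchange-to-exchange binding."""
    # Producers publish every event to this fanout exchange.
    channel.exchange_declare(exchange='events.fanout',
                             exchange_type='fanout', durable=True)
    # Topic exchange from which per-client filtered queues hang.
    channel.exchange_declare(exchange='events.topic',
                             exchange_type='topic', durable=True)
    # Exchange-to-exchange binding: everything hitting the fanout is
    # forwarded to the topic exchange with its original routing key.
    channel.exchange_bind(destination='events.topic', source='events.fanout')
    # Durable queue that records ALL events (the deferred/replay queue).
    channel.queue_declare(queue='events.log', durable=True)
    channel.queue_bind(queue='events.log', exchange='events.fanout')


def subscribe(channel, resource, resource_id):
    """On-demand queue filtered by routing key, e.g. just tasks with id 1."""
    result = channel.queue_declare(queue='', exclusive=True)
    queue = result.method.queue
    channel.queue_bind(queue=queue, exchange='events.topic',
                       routing_key='%s.%s' % (resource, resource_id))
    return queue
```

With this shape, `subscribe(channel, 'task', 1)` receives only task-1 events, while `events.log` keeps everything for replay.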
16:13:57 That way we can forget about anything that wasn’t received.
16:14:00 Sorry
16:14:10 That way we can forget about anything that WAS received.
16:14:31 so you mean keeping a list of what's being sent per client connection?
16:15:04 so when the client triggers 'replay' we only send what was NOT sent?
16:15:06 rcarrillocruz: Well, I say create a persistent queue that collects messages and has some nice high upper bound, one that can store messages through a day or so of zuul being down.
16:15:26 ah
16:15:33 i think rabbitmq queues have a TTL
16:15:41 And on our version of rabbit that’ll slowly but surely accumulate, until a client connects and starts acknowledging that it’s consuming messages.
16:15:43 i.e. keep stuff just for X days
16:16:06 * rcarrillocruz digging
16:16:42 https://www.rabbitmq.com/extensions.html
16:16:47 I figure, that way we make rabbit do all of the history management for us, and don’t have to implement some crazy on-disk cache.
16:16:55 check for Per-Queue Message TTL
16:17:00 yep
16:17:22 so let's agree that, sure, a client can trigger replay on-demand, but not since the beginning of mankind
16:17:35 just up to a day or something (obviously configurable)
16:18:10 rcarrillocruz: I don’t think it’s our job to cache messages in case a client dies horribly.
16:18:23 nod
16:18:41 All we can really guarantee is that we’ll send them every message, and assume they can handle it sanely
16:20:03 rcarrillocruz: But with that said, we can definitely create two queues, one as a replay cache and the other as the consumer cache.
I’m just concerned about memory
16:20:26 yeah, i thought about that also
16:20:46 creating a 'backup' queue when the websocket is opened and the normal queue is created
16:21:12 but this is twice the memory... so if we have a lot of consumers it can quickly kill the instance
16:21:32 well, then we could keep only a very short history, so that the client can recover from a short outage or a network issue
16:21:48 so lean towards having a global replay queue, that is capped (as you say, a day or something)
16:21:55 I don't see a case where a client needs events for a whole day
16:21:58 I think we have two use cases here. One is replay of already-handled messages, while the other is guaranteed delivery.
16:21:58 and it's the client that says 'ok, replay me everything from the last 3 hours'
16:22:09 rcarrillocruz: That makes sense.
16:22:56 ok, i think we have a plan here
16:23:23 second topic: refresh tokens. The more i think about it, the more I think i should not handle this (i'm only working on the backend thing)
16:23:27 rcarrillocruz: If there’s a global replay queue, how do we guarantee that the queue will remain full for other clients, if one client drains it?
16:24:28 i think there's a persistence setting for messages for that
16:24:31 kk
16:24:31 but it's a good point
16:24:45 let's put a work item for me to have a POC on this idea
16:24:52 You got it
16:24:53 and i'll get back to you at the meeting next week
16:24:58 I thought a persistent queue means the messages are kept until the TTL
16:25:11 #action rcarrillocruz Figure out global replay queue edge cases.
16:25:31 NikitaKonovalov: I think none of us actually know the actual behavior of the system right now :)
16:25:42 rcarrillocruz: So, refresh tokens.
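[Editor's note: the capped replay queue discussed above could be expressed with RabbitMQ's per-queue declare arguments (the Per-Queue Message TTL extension linked earlier, plus a length bound). The values here are examples, not settled numbers.]

```python
def replay_queue_arguments(ttl_days=1, max_messages=100000):
    """x-arguments for a capped replay queue: messages expire after
    ttl_days, and queue length is bounded so it cannot grow forever."""
    return {
        # Per-Queue Message TTL, expressed in milliseconds.
        'x-message-ttl': ttl_days * 24 * 60 * 60 * 1000,
        # Queue length limit: the oldest messages are dropped past this.
        'x-max-length': max_messages,
    }
```

These would be passed as the `arguments` parameter of the queue declaration.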
16:25:43 heh, indeed
16:25:46 so yeah
16:25:58 #topic Discussion: Subscription API Refresh Tokens
16:26:04 refresh tokens: i think that should be handled on the frontend, not the backend (which is what i'm working on right now)
16:26:20 i've been looking at the code and the refresh code is mainly done in the SB webclient
16:26:44 That mostly makes sense. What cases are we looking at here?
16:26:48 i.e. the request contains a refresh token from the OAuth endpoint; when the token expires, it uses it to get a new token
16:26:55 but nothing on the backend
16:27:02 the backend simply cares about tokens
16:27:16 that is, it's a Bearer token, it's in the db storage and it's valid
16:27:20 Ok, so if I connect with a valid token and get a socket, and then that token expires…
16:27:24 this raises another thing:
16:27:40 we should have a section in the SB webclient for streaming
16:27:41 :-)
16:27:51 something i have no clue about there, not a frontend guy
16:27:54 if a client notices that his token expires soon, it may refresh it through REST and reestablish the connection
16:28:25 NikitaKonovalov: I agree. What should happen on the socket connection handler serverside though? Should it drop the connection?
16:28:26 yolanda: would you work on this with me?
16:28:40 If we have ack/nack on the client, a suddenly dropped connection wouldn’t result in data loss.
16:28:46 hi
16:28:48 rcarrillocruz, sure
16:29:11 * yolanda cannot attend 100% to this meeting, tied up with on-call issues
16:29:32 krotscheck: so the way i have it right now is to close the websocket if check_access_token is not cool
16:29:41 yolanda: No worries.
16:30:04 so, access tokens, but i don't check anything on the backend for refresh tokens
16:30:07 And then leave it for the webclient to get a good token.
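[Editor's note: a sketch of the webclient-side flow being discussed, with hypothetical function names; the real SB webclient differs. It assumes the server simply rejects a connection made with an expired token, and the client refreshes through REST and retries.]

```python
class AuthExpired(Exception):
    """Raised by connect() when the server rejects an expired token."""


def consume(connect, refresh_token, token, max_refreshes=1):
    """Open a socket; on an expired token, refresh once and retry.

    connect(token) returns a connection or raises AuthExpired;
    refresh_token() exchanges the refresh token for a new access token.
    """
    for attempt in range(max_refreshes + 1):
        try:
            return connect(token)
        except AuthExpired:
            if attempt == max_refreshes:
                raise
            # Token expired: refresh via REST and reestablish.
            token = refresh_token()
```

The backend stays out of the refresh flow entirely; it only ever validates access tokens.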
16:30:15 yep
16:30:17 krotscheck: what about sending a warning while the token is still valid
16:30:26 NikitaKonovalov: good point
16:30:39 if a client does not care it will be disconnected
16:30:40 Yeah, building a streaming handler for oauth tokens is basically trying to rewrite the oauth spec. Let’s not do that.
16:30:46 we can check for token validity, and if it's going to expire we return to the websocket client 'hey, your token is gonna die'
16:31:08 but dunno... i think we should leave that to the frontend
16:31:19 a web browser websocket thingy can handle this
16:31:31 for command line websocket clients, we can leave that to implementors
16:31:33 rcarrillocruz: sure, the consumer should care, not the backend
16:31:35 NikitaKonovalov: I don’t even think that’ll be necessary. If a client is disconnected, they’ll try to reconnect, get a 401, trigger the refresh token flow, then reconnect and keep consuming where it left off.
16:31:37 i really think it's out of scope
16:32:37 And as long as the server queue remembers where we left off, we’re good.
16:32:50 krotscheck: then let's send a "Disconnecting for expired token" message before the connection is physically dropped
16:33:05 so it would not look like an unexpected error
16:33:21 krotscheck: so, that implies we have a grace period for a websocket connection death. meaning, we leave the associated queue open for some time, just in case they reconnect?
16:33:38 NikitaKonovalov: That’s fair. That way the API explains itself.
16:34:08 rcarrillocruz: Is that something the client could ask for when they connect?
16:34:23 A queue TTL perhaps?
16:34:28 * krotscheck ponders this.
16:34:34 That’s going to get dangerous :/
16:34:35 rcarrillocruz: it could be done. When the websocket is opened and the queue is created, we can return to the client 'ok, this is your queue ID'
16:34:54 so if the websocket dies, the client at least knows the id to bind to again when the connection is reestablished...
16:35:36 but we must weigh how long we keep those queues around... so as not to have leaked queues filling the instance memory...
16:36:34 Exactly, that’s what worries me.
16:36:46 That would fix our replay queue problem though :)
16:36:50 rcarrillocruz: is it an expensive operation to create a queue? if not, then it makes more sense to drop and recreate every time
16:37:05 and replay will help if some messages were missed
16:37:19 hmm
16:37:31 let's see if the client drain of the global replay queue is doable
16:37:49 if not, we can look at keeping on-demand queues around after the websocket dies...
16:37:53 decisions decisions...
16:37:55 :-)
16:38:28 Have fun with that :). Shall we move on?
16:38:45 yeah, sorry to hijack, thx folks!
16:38:46 i wanted feedback for that
16:38:47 https://review.openstack.org/147105
16:39:16 so that's a button to remove all recent events; the question raised is whether there is a need to open a confirmation box or not
16:39:54 * krotscheck doesn’t have an opinion, but can appreciate the frustration of an accidental click.
16:40:04 yolanda: what about a confirmation box with a "Don't ask me again" option?
16:40:29 NikitaKonovalov, and store the pref in user settings?
16:40:47 yes, that's the good place to store options
16:40:59 i was thinking of the modal box, but i also appreciate the situation where you have to clean tons of events and click through a modal box all the time
16:41:05 yolanda: maybe work on that as a separate change?
16:41:34 zaro, an incremental change looks fine to me, yes
16:41:45 but i wanted to get feedback from people
16:42:24 It feels like we’re undecided, because none of us know how it would feel to use it.
16:42:47 ttx would have an opinion too
16:42:55 yeah, that would be my preference. just merge this one and work on user preference as another change.
16:43:03 so i can close the change without any confirmation, then wait for feedback. sounds good
16:43:12 kk
16:43:15 I’m ok with that
16:43:26 I'm ok also
16:43:32 i'm on call this week but i'll try to find a hole to finish that
16:43:39 +1
16:43:46 I’ve got a couple of thoughts on improving the event list now that we have story_id’s in all relevant events.
16:44:10 What if we group the results we get from the server by story?
16:44:20 And then say: Here’s all the changes that happened in story X.
16:44:32 Someone commented, someone updated status, etc etc.
16:44:36 krotscheck: sounds good
16:44:46 And then only offer one ‘remove’ button which flushes all relevant events.
16:45:10 and the ui could have a hide/show toggle so the results are displayed in a more compact way
16:45:17 yep
16:45:27 12 things happened to story X: (show more)
16:45:42 sounds good to me, yes
16:45:56 Coool. Any other discussion topics?
16:46:04 what do you think about sorting that? by story id?
16:46:11 by latest update of some event?
16:47:05 I’d start with just sorting by the order they arrive in.
16:47:05 i like that.
16:47:06 yolanda: I think the story with the latest event should be on top, so yes, by update times
16:47:23 Which is chronologically by date.
16:47:27 i almost want an entire page dedicated to events that i can sort and chop up any way i want
16:47:28 yep
16:48:15 makes sense to me, but first have something simple in the dashboard, then we could add an independent events screen
16:48:49 yep
16:48:56 Ok, let’s move on to ongoing work.
16:49:02 so why fuss over how it should look or return? just leave it up to the user to define.
16:49:31 zaro: I guess I don’t understand what you mean by that.
16:49:55 Because to be able to have a user define something, the UI has to be able to render it that way first.
16:50:12 leave it as is. make something that users can sort and filter whichever data they want out of it
16:51:03 zaro: Ok, do you want to take a stab at putting that together?
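[Editor's note: the grouping discussed above, sketched as a plain function. It assumes each event carries a `story_id` and a sortable `created_at`; the field names are illustrative.]

```python
def group_by_story(events):
    """Group a flat event list by story_id, with the story that has the
    most recent event on top, as proposed in the discussion."""
    groups = {}
    for event in events:
        groups.setdefault(event['story_id'], []).append(event)
    # Sort stories so the one with the latest event comes first.
    return sorted(groups.items(),
                  key=lambda item: max(e['created_at'] for e in item[1]),
                  reverse=True)
```

The UI could then render each bucket as "N things happened to story X: (show more)" with a single per-story remove button.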
16:51:19 hmm, good point, not atm
16:51:25 :D
16:51:32 Alright, moving on.
16:51:33 #topic Ongoing Work (rcarrillocruz)
16:51:47 We more or less covered your stuff in discussion, anything else you want to add?
16:51:59 what was commented earlier, nothing else, nope
16:52:03 Coool
16:52:12 #topic Ongoing Work (jedimike)
16:52:23 Eaten by bandits?
16:53:12 I guess he’s not here.
16:53:20 #topic Ongoing Work (NikitaKonovalov)
16:53:26 How’s the client API coming?
16:53:47 The Stories/Tasks support is in progress
16:54:15 what I've noticed is that we cannot access tasks as a story subresource
16:54:49 so I'm now refactoring the API controllers to support both /tasks and /stories/id/tasks endpoints
16:55:05 at least the API will look more consistent
16:55:24 NikitaKonovalov: Ironic does something interesting with that, you might want to talk to that team to see how they have automatically nested resources.
16:55:58 krotscheck: we already have this in a lot of places like teams/users and project_groups/projects
16:56:12 and those work fine
16:56:20 Works for me.
16:56:30 so there is no reason why tasks would not work
16:56:43 Also I've finally updated the tags change
16:56:51 They managed to work some magic where any controller ended up being magically embedded as a subcontroller for every other controller
16:57:09 krotscheck: I'll have a look
16:57:11 Yay tags!
16:57:17 link for tags https://review.openstack.org/#/c/114217/
16:57:24 NikitaKonovalov: It might be too much magic.
16:57:37 most comments resolved I hope
16:57:48 Yay tags :)
16:58:05 Is that all?
16:58:14 all from me
16:58:16 #topic Ongoing Work (yolanda)
16:58:59 krotscheck, sorry, on 2 conversations at the same time
16:59:04 something is on fire here
16:59:13 Ok, we’ll let you pass then :)
16:59:19 #topic Ongoing Work (krotscheck)
16:59:34 I’ve been working on various utilities necessary for email & simplification.
16:59:57 The email templating engine is up, and I’ve written a new, tested algorithm to evaluate the correct list of subscribers for a given resource.
17:00:33 I’ve also got a new UI where you can actually see which story a given comment was left on.
17:00:50 Small change, but omg so much better.
17:01:22 Anyway, my next big push is going to be to build an email outbox handler that just sends emails.
17:01:31 After that I’ll build the digest and the individual email consumers.
17:01:45 And now we’re out of time.
17:01:46 DOH
17:01:49 Thanks everyone!
17:01:55 #endmeeting