17:00:38 #startmeeting craton
17:00:40 Meeting started Thu Mar 2 17:00:38 2017 UTC and is due to finish in 60 minutes. The chair is thomasem. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:43 The meeting name has been set to 'craton'
17:00:56 #chair sigmavirus sulo jimbaker thomasem
17:00:57 #link https://etherpad.openstack.org/p/craton-meetings
17:00:58 Current chairs: jimbaker sigmavirus sulo thomasem
17:01:14 #topic Roll Call
17:01:16 o/
17:01:19 o/
17:01:20 o/
17:01:25 git-harry: the root cause of that bug is that the columns are not fixed, so we sometimes sort on columns that have None in them
17:01:26 o/
17:01:37 if we had consistent column ordering, that would be better
17:01:42 sigmavirus: no, that is related but not the root cause
17:01:50 o/
17:02:18 sulo: do you have time for this meeting?
17:02:24 yes .. I am here
17:02:32 awesome
17:03:24 I think we can skip action items. All are being punted.
17:03:32 so some topics: Demo Monday and its support (data ingest, tmux/other access)
17:03:36 git-harry proposed openstack/python-cratonclient master: Add table formatter sort key https://review.openstack.org/440617
17:03:46 plus critical bugs
17:03:56 jimbaker: would appreciate it if you could send the invite for the demo
17:04:05 farid, ^^^
17:04:08 * thomasem would really like for us to make a habit of adding topics to the etherpad
17:04:15 thomasem is entirely right!
17:04:24 * farid ack's
17:04:27 git-harry proposed openstack/python-cratonclient master: Add devices-list to support /v1/devices https://review.openstack.org/438561
17:05:08 #topic Demo Monday
17:06:20 So, as I understand it, we're going to go down the list of use cases from Bjoern and map them to a demonstrable thing in Craton
17:06:27 thomasem, ack
17:06:50 We have farid's ppt in flight
17:06:58 thomasem: to be fair, I don't oft have topics to add ahead of time =P
17:07:04 But I agree
17:07:21 That is fair, indeed. :P
17:07:34 it's the nature of collab - we figure this out as we go at times
17:07:43 #topic bug https://bugs.launchpad.net/python-cratonclient/+bug/1668221
17:07:44 Launchpad bug 1668221 in Craton's Python Client "Random NoneType() < int() error from cli" [High,In progress] - Assigned to git-harry (git-harry)
17:08:02 So sigmavirus and git-harry were chatting about this one, let's get that chat resolved
17:08:43 So this is related to (at least) the fact that we don't have a consistent column ordering
17:08:48 what that means is that we always sort on the first column
17:09:08 so there are times where the first column has the potential to have None in it as well as another type
17:09:12 ahh. and why are we sorting in the client?
17:09:36 shouldn't this respect what the API produces, given pagination?
17:09:45 If we pick a consistent column ordering and a consistent column to sort on, then that negates the need to take None into consideration
17:09:58 jimbaker: so this formatter is a reimplementation of something ripped off from the rest of openstack
17:10:05 sigmavirus: --fields
17:10:08 We can default it to order based on what's returned
17:10:12 +1
17:10:15 git-harry: --of-gold
17:10:19 :)
17:10:57 i feel like the stuff that has been ripped out from the rest of openstack has been of unfortunately dubious quality. sorry
17:11:10 jimbaker: no offense taken.
I didn't rip it off
17:11:12 =)
17:11:13 sigmavirus: you can order the columns to try to work around the issue, and that will likely work most of the time, but if someone specifies a set of fields, that will nullify the workaround
17:11:14 indeed
17:11:31 git-harry: which is why jimbaker and I agree that no ordering on anything by default makes sense
17:11:40 exactly, this problem just goes away
17:12:10 and py2 vs py3 sorting issues
17:12:15 magically gone
17:12:17 wait, are you suggesting stripping the client-side sorting and just taking it from the API?
17:12:22 yes
17:12:38 given pagination, does it even make sense? no
17:12:45 okay, then yes I agree
17:13:11 git-harry: the entire concern about sortby_index goes away
17:13:22 it can mean we should default some cols to sort on, etc., in the API req
17:13:29 so the API does the right thing
17:13:35 jimbaker: the API defaults to [id, created_at]
17:13:58 sigmavirus, got it, otherwise pagination would just NOT work
17:14:03 as it is now
17:14:06 jimbaker: correct
17:14:06 so yeah, we are good
17:14:28 On that topic, for projects we probably need to just sort on created_at or name or something other than ID, lol.
17:15:03 thomasem, well that's a separate question: maybe we should default to name, ..., instead of id
17:15:05 thomasem: right, we should update the API to that default
17:15:06 in general
17:15:14 yes
17:15:16 jimbaker: meh
17:15:28 ordering on id makes enough sense for most
17:15:29 ok, a bikesheddable moment it seems :)
17:15:32 lol
17:15:38 I agree it's silly for UUIDs
17:15:41 Ruh roh, Scoob. Just a passing comment!
17:15:45 moving on!
17:16:09 #topic vars support in CLI
17:16:11 jimbaker: you had questions?
17:16:20 * sigmavirus will brb
17:16:30 yes, just wanted to find out if the code is ready to try out
17:16:40 sans testing and all, of course
17:17:00 Oh, just for projects. I will be adding the support to the others sometime soon here.
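[Editor's note: a short illustration of the bug 1668221 discussion above. In Python 2, `None` compared as less than any integer, so sorting a column that mixed `None` and ints silently worked; in Python 3 the same comparison raises `TypeError`, which is the "Random NoneType() < int() error". The sort key shown at the end is a common workaround, not the fix the team chose — they opted to drop client-side sorting entirely and rely on the API's ordering.]

```python
# Column values as the client might see them: ints mixed with None.
rows = [(3, "host-a"), (None, "host-b"), (1, "host-c")]

# Python 3 refuses to compare None with int, so a plain sort blows up.
try:
    sorted(rows)
except TypeError as exc:
    print(exc)  # '<' not supported between instances of 'NoneType' and 'int'

# Hypothetical workaround: sort on a flag first so None values group last,
# never comparing None against an int directly.
safe = sorted(rows, key=lambda row: (row[0] is None, row[0] or 0))
print(safe)  # [(1, 'host-c'), (3, 'host-a'), (None, 'host-b')]
```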
17:17:02 i did take a look yesterday, and it was very much WIP, which is fine
17:17:11 ok, that works for me
17:17:38 thomasem, also - could you update the commit message, if you haven't already, with specific usage
17:17:51 jimbaker: that seems like a bit too much to ask =P
17:17:52 Yes, mind making a note of that on the review?
17:18:07 that's the one place it's been documented. other docs elsewhere are fine - but stripped down in the commit msg is fine
17:18:35 * thomasem should have requested acceptance criteria
17:18:49 thomasem, we changed the acceptance criteria yesterday
17:18:55 midflight, love it
17:19:05 How can we change what does not exist?
17:19:08 ;)
17:19:51 Anywho, please note on the review things you'd like to change. Keep in mind time constraints, though, please. I don't wish to scope creep this thing into oblivion.
17:19:54 there was some degree of acceptance criteria in the bug, or something. but regardless: just want to ensure we communicate how we use the var manager to work with vars. that's all
17:20:22 Sure
17:20:28 thomasem, well, all i ask is the ability to get/set/delete vars with respect to a resource. the hows, i don't care too much about
17:20:52 Sounds good
17:20:57 ok, i will try it out after this meeting and ask thomasem if i run into problems
17:21:02 done?
17:21:05 yeppers!
17:21:22 Lemme know what blows up
17:21:39 will do
17:21:43 excellent
17:21:47 Any other topics?
17:21:54 folks wish to discuss
17:22:02 data ingest
17:22:07 #topic data ingest
17:22:38 antonym, cloudnull, sulo, zz_pwnall1337, this is relevant to your work
17:23:42 So, where does this all live right now? The scripts.
17:23:58 that's my first question
17:24:02 as well
17:24:36 thomasem, my ideal is we can have a rosetta stone of sorts
17:24:37 I'd appreciate a way to POST with a list of devices for ingest.
17:24:49 that way i can do imports a cab at a time
17:24:50 three different ways to accomplish this import
17:24:52 or something similar
17:25:11 iterating through the client is error prone and clunky.
17:25:22 cloudnull, yeah, so we will have that in https://bugs.launchpad.net/craton/+bug/1661714
17:25:22 Launchpad bug 1661714 in craton "Bulk endpoint for working with resources" [High,Confirmed] - Assigned to Thomas Maddox (thomas-maddox)
17:25:42 thomasem, still plan to work on that? (after other work?)
17:26:09 jimbaker: Absolutely, but I do not want to prevent someone else from working on it if others have bandwidth and no higher priority.
17:26:42 re rosetta stone and 3 ways: 1. REST API; 2. python client; 3. CLI. and of course the bulk endpoint is sort of the real way to do this
17:27:19 do we need an endpoint for this specifically though?
17:27:20 anyway, just wanted to make sure we look at this usage from a completeness perspective
17:27:31 Some questions around that, actually.
17:27:39 So, bulk endpoint in API - we wouldn't want it to block, would we?
17:27:42 if we call the hosts endpoint i should be able to POST a list and the api should do the right thing
17:27:49 cloudnull, this gets into overloading with respect to v1/hosts
17:27:56 etc
17:28:05 overloading?
17:28:30 does POST v1/hosts (etc) take a single resource, or can it also take a list of resources?
17:28:51 I
17:29:02 or am i misconstruing your request?
17:29:23 **imo it should take a list?
17:29:52 it can be a list of 1
17:30:04 or many
17:30:07 i think the opposite consideration is, we have a number of collections that need to be updated; and that's why we were looking at a bulk endpoint
17:30:35 eg, you will want to post a cloud of 1-or-more regions, etc, ... down to the vars
17:30:50 yes.
17:30:57 Yeah, I understood it as someone wanting to import an entire, say, project.
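[Editor's note: a minimal sketch of the overloading being debated above — letting POST v1/hosts accept either a single resource or a list of resources. The function name and validation here are hypothetical, not Craton code; the point is just that a handler can normalize "one or many" into a list and treat a single object as a list of one.]

```python
def normalize_hosts_payload(payload):
    """Accept either {"name": ...} or [{"name": ...}, ...] and return a list."""
    if isinstance(payload, dict):
        payload = [payload]  # a single resource is just a list of one
    if not isinstance(payload, list) or not all(isinstance(h, dict) for h in payload):
        raise ValueError("payload must be a host object or a list of host objects")
    return payload

print(normalize_hosts_payload({"name": "host-1"}))
print(normalize_hosts_payload([{"name": "host-1"}, {"name": "host-2"}]))
```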
17:31:25 thomasem, very valid way of thinking about it from a scoping perspective
17:31:37 but if i'm thinking of this like a DC, I'd like to POST a cab at a time
17:31:47 so something like importing 22 hosts
17:31:54 so maybe v1/projects is the bulk endpoint....
17:32:03 then the network gear for that cab
17:32:04 etc
17:32:10 Hmmmmm
17:32:25 right. it should be easy to add a new cab
17:32:42 IE if i had to create the project, cell, etc, with a single api call... so be it.
17:32:45 So, here's my problem with this
17:32:55 but not having the option to POST many hosts at a time is a bummer
17:33:06 sure, we get that
17:33:15 Bulk won't scale well, and if we're uploading a huge cloud, it could be a problem if we're expecting to block on that operation.
17:33:29 thomasem, well i look at this as the same problem as pagination
17:33:31 Unless we want to use more like "job" semantics around it.
17:33:55 we don't let you download the whole cloud now either, so to speak
17:34:01 jimbaker: The problem is we want an import to be atomic.. no?
17:34:01 thomasem: the api should return 202 once the payload has been received and get to work behind the scenes
17:34:14 there should be no need to block on the client
17:34:26 Correct, but what if something fails halfway through?
17:34:30 right, no need to block on the client. but we are just doing db ops
17:34:39 these are not long-running tasks of bringing stuff up
17:35:02 as it is, we already assume that long-running stuff is treated differently, in workflows
17:35:14 So this would be long-running
17:35:15 thomasem, i didn't say bulk was easy :)
17:35:33 thomasem, i very much disagree with that premise. here's an alternative way we could have done pagination
17:35:38 we could have kept a cursor open
17:35:40 thomasem: if it forked and moved the request over to a transaction id which could be queried, then we could poll to see when the op was completed.
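[Editor's note: a rough sketch of the "job semantics" floated above — the API accepts a bulk payload, records a job, answers immediately with a job id (the HTTP layer would return 202 Accepted), and the client polls the job's status rather than blocking. This is an assumed design, not Craton's implementation; class and field names are made up for illustration.]

```python
import threading
import uuid

class ImportJobs:
    """Tracks background import jobs by id, for 202-style async handling."""

    def __init__(self):
        self._jobs = {}
        self._lock = threading.Lock()

    def submit(self, payload, worker):
        """Record a job, run worker(payload) in the background, return the id."""
        job_id = str(uuid.uuid4())
        with self._lock:
            self._jobs[job_id] = {"status": "running", "error": None}

        def run():
            try:
                worker(payload)
                status, error = "succeeded", None
            except Exception as exc:
                status, error = "failed", str(exc)
            with self._lock:
                self._jobs[job_id].update(status=status, error=error)

        t = threading.Thread(target=run)
        t.start()
        t.join()  # joined here only so this demo is deterministic
        return job_id

    def status(self, job_id):
        """What a GET on the job resource would return to the polling client."""
        with self._lock:
            return dict(self._jobs[job_id])

jobs = ImportJobs()
ok = jobs.submit([{"name": "host-1"}], lambda hosts: None)
print(jobs.status(ok))  # {'status': 'succeeded', 'error': None}
```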
then page through it
17:35:52 Exactly
17:35:54 that is a terrible idea
17:35:59 if it failed we would know.
17:36:04 can anyone point me to the script that has been written for doing this on the client side?
17:36:17 cloudnull: that's exactly my point... We want to communicate the success/failure of the import
17:36:20 and handle it accordingly
17:36:22 git-harry, right, we just asked about this
17:36:32 if something goes wrong, we'll want to undo what we've already done.
17:36:35 jimbaker: yes, and I didn't see an answer
17:36:48 i know that we discussed putting this in an osic ops repo
17:37:08 But, then, why not submit to a /v1/imports resource?
17:37:12 couldn't we just not commit the insert and report back via the transaction id that the POST failed?
17:37:19 That will create an import "job" and give you back a place you can watch
17:37:52 thomasem, so the question is: what's the maximum size of the submitted resource for that import job?
17:38:16 i mean, we can control the payload size within the api
17:38:45 we are going to do it in some batch. how do we size this batch?
17:38:56 I don't imagine the client needs to care?
17:39:09 thomasem, currently it does with pagination, however...
17:39:24 just asking - what's the difference?
17:39:29 How do you break up an entire cloud import?
17:39:31 What would that look like?
17:39:52 i didn't say it was trivial :)
17:39:58 jimbaker: I know
17:40:01 I never expected it was
17:40:30 heh
17:40:43 The difference is that in pagination it's a list of resources of the same type
17:40:58 anyway... here's the takeaway: we have to think about this more
17:41:32 Rather, they're equal in the hierarchy of the response.
17:42:08 i did like our discussion re graphql
17:42:37 although i don't know if it solves for import.
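[Editor's note: a small sketch of the atomicity point above — "couldn't we just not commit the insert" — using sqlite3 as a stand-in database, not Craton's actual schema. The whole bulk insert runs in one transaction, so a failure partway through rolls everything back, and the import can report failure with nothing half-applied.]

```python
import sqlite3

def bulk_insert_hosts(conn, hosts):
    """Insert all hosts or none; returns True on commit, False on rollback."""
    try:
        with conn:  # sqlite3's context manager commits, or rolls back on error
            conn.executemany(
                "INSERT INTO hosts (name, ip) VALUES (?, ?)",
                [(h["name"], h["ip"]) for h in hosts],
            )
        return True
    except sqlite3.IntegrityError:
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hosts (name TEXT PRIMARY KEY, ip TEXT)")

good = [{"name": "host-1", "ip": "10.0.0.1"}, {"name": "host-2", "ip": "10.0.0.2"}]
bad = [{"name": "host-3", "ip": "10.0.0.3"}, {"name": "host-1", "ip": "dup"}]

print(bulk_insert_hosts(conn, good))  # True
print(bulk_insert_hosts(conn, bad))   # False: duplicate key, host-3 rolled back too
print(conn.execute("SELECT COUNT(*) FROM hosts").fetchone()[0])  # 2
```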
it certainly helps with export
17:42:51 not to say we should do exactly this, but Miguel (he's a racker) has a fairly straightforward approach to doing async w/ flask -- https://blog.miguelgrinberg.com/post/using-celery-with-flask -- in that way we could have a simple-ish solution to the issue of bulk import and being able to release a client call.
17:43:16 which should scale fine.
17:43:32 without also needing us to reinvent this wheel
17:43:43 i doubt the problem turns on async or not
17:44:22 jimbaker: re: [11:33] Bulk won't scale well and if we're uploading a huge cloud, it could be a problem if we're expecting to block on that operation.
17:44:38 cloudnull: I totally agree it should be async.
17:44:39 and hence my discussion of pagination...
17:44:55 and analogs of
17:45:01 But I don't think pagination solves this. Anyway, we can get into the details of that design.
17:45:08 But it sounds like that's not today?
17:45:14 most certainly not
17:45:30 yeah, i'm not sure pagination is what we're looking for here.
17:46:17 Alrightyo. Any other topics folks want to discuss?
17:46:21 Or right into Open Discussion?
17:46:40 +1
17:46:51 #topic Open Discussion
17:48:59 crickets
17:49:04 Okay, heads down, everyone?
17:49:08 yes
17:49:09 zz_pwnall1337: around?
17:49:14 #endmeeting