07:00:28 #startmeeting requirements
07:00:29 Meeting started Wed Dec 13 07:00:28 2017 UTC and is due to finish in 60 minutes. The chair is prometheanfire. Information about MeetBot at http://wiki.debian.org/MeetBot.
07:00:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
07:00:32 The meeting name has been set to 'requirements'
07:00:36 tonyb, prometheanfire, number80, dirk, coolsvap, toabctl, smcginnis
07:00:39 o/
07:00:39 #topic rollcall
07:00:40 o/
07:01:41 hi
07:01:41 \o
07:02:03 * tonyb is cooking dinner so I might be a little laggy
07:02:07 tonyb: welcome back :P
07:02:17 \o/
07:03:20 #topic Any controversies in the Queue?
07:03:57 the kubernetes thing?
07:04:14 https://review.openstack.org/#/c/526925/
07:04:18 #link https://review.openstack.org/#/c/526925/
07:04:26 I think a decision needs to be made about newton: it's dead and not gating (and, on purpose, not a part of the new zuul). I think we should close the open reviews for newton
07:04:45 wfm
07:04:47 dirk: ya, I was hoping to get a better response, but upstream seems like they want to uncap, so I'll vote
07:04:59 dirk: the newton thing works for you?
07:05:08 prometheanfire: sorry, yes, newton closing +1
07:05:24 prometheanfire: I see there are new updates in the kubernetes changeset that I haven't consumed yet
07:05:29 * dirk preferred sleeping instead
07:05:58 o/
07:06:05 prometheanfire: Yeah, anything on newton should just be abandoned as a matter of course
07:06:15 we can't kill the branch yet but newton is EOL
07:06:27 k, I'll do that; ya, I know we can't kill the old branches
07:06:44 that kubernetes thing is potentially disruptive so we should get buy-in from affected projects
07:07:15 also it'd be nice if we could do something better than just cp the websocket-client
07:07:21 #agreed closing open reviews against the stable/newton branch, as the rest of newton is EOL and gating is no longer working
07:08:15 tonyb: think we should push upstream for a release?
I think we should maybe request a 4.0.1 or something
07:08:48 prometheanfire: We should request it and/or research as to why it's there
07:08:58 they may have a compelling reason
07:09:26 no, they are in progress on uncapping it
07:09:46 https://github.com/kubernetes-incubator/client-python/issues/413
07:09:51 #link https://github.com/kubernetes-incubator/client-python/issues/413
07:10:04 prometheanfire: can you add that link to the git commit message?
07:11:02 https://github.com/kubernetes-incubator/client-python/commit/c4aac96342a1c3444b3eedf0a9da63353e25cf3d
07:11:12 the commit message on why the cap is there isn't very telling, but it says it has issues
07:11:42 prometheanfire: okay, well then I guess we've done what we can. Having the cap is gross, but as long as we're confident we can get it removed before we cut queens
07:12:18 tonyb: I think so
07:12:51 cool
07:12:59 the last comment is pretty positive
07:13:18 dirk, according to dims that was the issue for the cap - https://github.com/kubernetes-incubator/client-python/issues/262 - I tried to reproduce it in all websocket-client versions without success ..
07:14:07 ok, I'll submit a PR referencing it then
07:14:41 leyal: great pointer!
07:14:48 prometheanfire: looks like we want to include this link: https://github.com/kubernetes-incubator/client-python/pull/299
07:15:27 sure
07:15:41 tonyb: I wonder -- is any project affected by the websocket-client cap that is not *also* using kubernetes?
07:16:04 dirk: give me 5 .....
07:16:07 * dirk can spend 15 minutes on scripting it to find the answer himself or just ask tony who probably has the magic script ready ;-)
07:17:18 dirk: I have the tool, but the data is out of date so it'll take a little while
07:17:29 or I could just be hacky ....
07:18:10 http://codesearch.openstack.org/?q=websocket-client&i=nope&files=.*requirements.*&repos=
07:18:13 that'd work
07:18:31 and it's not in setup.*
07:18:31 Eyal Leshem proposed openstack/requirements master: Use kubernetes client 4.0.0 https://review.openstack.org/526925
07:18:40 [tony@thor openstack]$ grep websocket `grep -l kubernetes */*/*requirements.txt`
07:18:43 openstack/requirements/global-requirements.txt:websocket-client>=0.33.0 # LGPLv2+
07:18:46 openstack/rpm-packaging/global-requirements.txt:websocket-client>=0.33.0 # LGPLv2+
07:18:47 leyal: ping
07:18:49 openstack/rpm-packaging/requirements.txt:websocket-client>=0.33.0 # LGPLv2+
07:18:52 so the answer is no
07:19:07 prometheanfire, pong
07:19:40 dirk: unless openstack/rpm-packaging/requirements.txt is a legit hit - I thought it might be a false positive
07:20:48 leyal: we are talking about your patch :D
07:22:07 prometheanfire, yep, I am following, thanks :) - I'm waiting for a response in https://github.com/kubernetes-incubator/client-python/issues/413, but I didn't manage to reproduce the issue in any version that's available on pypi..
07:22:31 leyal: ya, I'm making a PR to remove the cap now
07:23:26 tonyb: that's a false positive
07:23:34 tonyb: rpm-packaging has a full copy of the requirements/*txt files
07:23:55 dirk: okay, that was more or less what I thought
07:23:56 prometheanfire, thanks - so the plan is to insert it now with the cap, and remove it when the PR is accepted?
07:24:02 so agreement to move forward with the cap?
07:24:10 leyal: we'd also need an upstream release
07:24:11 as it doesn't seem to affect anyone :)
07:24:19 dirk: yep, agreed
07:24:47 #agreed insert a cap on websocket-client until kubernetes 4.0.1 removes the cap
07:24:52 tonyb: can you revisit your -1 ?
07:25:10 dirk: I only just added it.
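[Editor's note: the check tonyb runs above — grep for websocket in every requirements file that also mentions kubernetes — can be sketched in Python. The sample data and the `consumers_of` helper below are illustrative stand-ins, not real repository contents:]

```python
import re

# Hypothetical sample standing in for the */*/*requirements.txt files
# that the shell one-liner above scans.
requirements_files = {
    "openstack/requirements/global-requirements.txt": [
        "kubernetes>=4.0.0  # Apache-2.0",
        "websocket-client>=0.33.0  # LGPLv2+",
    ],
    "openstack/nova/requirements.txt": [
        "oslo.config>=5.1.0  # Apache-2.0",
    ],
}

def consumers_of(package, files):
    """Return the paths whose requirements list the given package."""
    pattern = re.compile(r"^%s[<>=!]" % re.escape(package))
    return {
        path
        for path, lines in files.items()
        if any(pattern.match(line) for line in lines)
    }

kube_users = consumers_of("kubernetes", requirements_files)
ws_users = consumers_of("websocket-client", requirements_files)
# Projects hit by a websocket-client cap that do NOT use kubernetes:
print(sorted(ws_users - kube_users))  # → []
```

An empty result mirrors the conclusion in the log ("so the answer is no"): every consumer of websocket-client already pulls it in via the kubernetes client, so the temporary cap is low-risk.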
07:25:50 dirk: I still think bumping 3 major releases in a single bound without reaching out to the 3 consumers would be a bad idea
07:26:51 leyal: do you mind sending an email to the mailing list asking for feedback from the other consumers of kubernetes-client?
07:27:21 tonyb: ah, yes, I agree
07:27:38 prometheanfire, will do - need to ask about the cap or about the update to 4.0.0 ?
07:27:43 to be fair, the bump to 4.0 is something we're undoing in the uc update as of eternity already
07:28:00 no need to ask about the cap, more about the big jump in kube-client version
07:28:24 https://github.com/kubernetes-incubator/client-python/pull/416 btw
07:28:35 dirk: Yeah, and getting to a modern library would be good.
07:29:03 ok, moving on then
07:29:13 any other controversies in the queue?
07:29:43 not from me
07:29:54 ok, moving on
07:29:56 #topic PTG
07:30:25 nothing here, sent in the request for time and space, but no progress past that
07:31:13 space and time is an illusion anyway ;-)
07:31:16 and
07:31:39 something like that
07:32:00 and?
07:32:03 oh, sed
07:32:10 #topic open discussion
07:33:44 ok, closing in a couple of min
07:33:52 One question: is anyone looking at why propose constraints update is no longer working??
07:34:22 It seems to be missing for this week
07:34:24 * tonyb didn't know it was broken ... clearly I've been distracted
07:34:26 bot not running?
07:34:31 ya, me too
07:34:39 And I cannot find logfiles
07:34:53 Well, zuul periodic run I think
07:35:38 It used to be somewhere on logs.o.o but I can't see it there anymore
07:35:46 gimme 5
07:35:56 k, guess that's something to look into :|
07:36:14 http://logs.openstack.org/e3/e31a2077720bb7b95a16fb18d13011eeb592c126/post/
07:36:26 is the post pipeline for the current master
07:36:34 propose updates is there
07:36:39 Sorry
07:36:51 oh, propose *constraints* update
07:36:52 zuul has been having problems, maybe that messed with things the last couple of days
07:36:53 I meant periodic generate constraints
07:36:59 that won't be in that list ....
07:37:28 So tonyb takes up the action item?
07:37:49 welcome back, here's more work :P
07:37:57 :-)
07:38:06 last periodic pipeline ... http://logs.openstack.org/periodic/git.openstack.org/openstack/requirements/master/
07:38:28 it looks like somehow it's now omitted from the list of jobs
07:38:41 :|
07:38:51 Ah, so that's the new location
07:39:05 dirk: Sure, I'll look at it tomorrow but zuulv3 is still outside my wheelhouse
07:39:38 I can do some grepping during my meetings
07:39:59 I will ping you if I find it before your tomorrow
07:40:11 dirk: cool
07:40:17 k
07:40:24 and I'll go to sleep :P
07:40:32 anything else?
07:40:47 Nope
07:40:57 #endmeeting