Thursday, 2019-07-25

*** whoami-rajat has quit IRC  [00:01]
*** ricolin has joined #openstack-tc  [00:55]
*** wxy-xiyuan has joined #openstack-tc  [01:11]
*** tdasilva has quit IRC  [01:20]
*** tdasilva has joined #openstack-tc  [01:21]
*** mriedem has quit IRC  [01:49]
*** whoami-rajat has joined #openstack-tc  [03:06]
*** Luzi has joined #openstack-tc  [05:03]
*** jaosorior has quit IRC  [06:03]
*** jaosorior has joined #openstack-tc  [06:20]
-openstackstatus- NOTICE: The git service on opendev.org is currently down.  [06:51]
*** ChanServ changes topic to "The git service on opendev.org is currently down."  [06:51]
*** jamesmcarthur has joined #openstack-tc  [07:04]
*** iurygregory has joined #openstack-tc  [07:15]
*** adriant has quit IRC  [07:17]
*** adriant has joined #openstack-tc  [07:18]
*** jaosorior has quit IRC  [07:57]
*** tosky has joined #openstack-tc  [08:30]
-openstackstatus- NOTICE: Services at opendev.org like our git server and at openstack.org are currently down, looks like an outage in one of our cloud providers.  [08:35]
*** ChanServ changes topic to "Services at opendev.org like our git server and at openstack.org are currently down, looks like an outage in one of our cloud providers."  [08:35]
*** ChanServ changes topic to "OpenStack Technical Committee office hours: Tuesdays at 09:00 UTC, Wednesdays at 01:00 UTC, and Thursdays at 15:00 UTC | https://governance.openstack.org/tc/ | channel logs http://eavesdrop.openstack.org/irclogs/%23openstack-tc/"  [08:42]
-openstackstatus- NOTICE: The problem in our cloud provider has been fixed, services should be working again  [08:42]
*** jamesmcarthur has quit IRC  [08:48]
*** lpetrut has joined #openstack-tc  [09:15]
*** lpetrut has quit IRC  [09:16]
*** lpetrut has joined #openstack-tc  [09:16]
*** e0ne has joined #openstack-tc  [09:32]
*** jaosorior has joined #openstack-tc  [10:47]
*** adriant has quit IRC  [11:07]
*** adriant has joined #openstack-tc  [11:08]
*** jaosorior has quit IRC  [11:08]
*** mriedem has joined #openstack-tc  [11:51]
*** sapd1_ has joined #openstack-tc  [11:59]
*** sapd1 has quit IRC  [11:59]
*** iurygregory has quit IRC  [12:11]
*** iurygregory has joined #openstack-tc  [12:11]
*** jeremyfreudberg has joined #openstack-tc  [13:17]
*** jaosorior has joined #openstack-tc  [13:23]
*** Luzi has quit IRC  [13:33]
*** ricolin has quit IRC  [13:36]
*** AlanClark has joined #openstack-tc  [13:41]
*** jaosorior has quit IRC  [13:43]
*** iurygregory has quit IRC  [13:59]
*** ijolliffe has joined #openstack-tc  [14:01]
*** iurygregory has joined #openstack-tc  [14:02]
*** AlanClark has quit IRC  [14:21]
*** ijolliffe has quit IRC  [14:43]
*** lbragstad has joined #openstack-tc  [14:51]
*** zaneb has joined #openstack-tc  [14:56]
<fungi> it's thursday office hour yet again  [15:00]
<mnaser> \o/  [15:00]
*** ricolin_phone has joined #openstack-tc  [15:00]
<evrardjp> o/  [15:01]
<ricolin_phone> O/  [15:02]
*** zaneb has quit IRC  [15:02]
<mnaser> so uh  [15:02]
<mnaser> i'm pretty concerned at the current state of magnum  [15:02]
<mnaser> i personally have struggled to get any code landed that fixes fundamental bugs that affect our deployment and had to unfortunately run a fork with cherry-picks cause its taken so long  [15:03]
<lbragstad> o/  [15:03]
<mnaser> there's 4 reviewers in magnum-core, 2 of which havent renewed things for months  [15:03]
<mnaser> and the 2 "active" cores, i've emailed and asked to add a few potential ones but with no action done  [15:03]
*** zaneb has joined #openstack-tc  [15:03]
*** zbitter has joined #openstack-tc  [15:05]
<mnaser> i'd like for this project to succeed but .. i dont know what to do at this point  [15:06]
<mnaser> it's broken out of the box right now.  [15:07]
*** zaneb has quit IRC  [15:08]
<lbragstad> has anyone floated the idea of expanding the core team to allow for more review throughput?  [15:09]
<lbragstad> other teams have done that to land critical patches  [15:10]
<evrardjp> mnaser: so basically the action item of last time didn't result in a good outcome for you?  [15:11]
<evrardjp> lbragstad: it was proposed in these office hours I think... maybe 2 weeks ago? mnaser when did you bring that up last time?  [15:11]
<lbragstad> aha  [15:12]
<mnaser> i did that, i proposed two people (not me :]) and there was some "yay good idea"  [15:12]
<mnaser> but no actionable things  [15:12]
<evrardjp> oh I thought you were planning to send an email  [15:12]
<mnaser> nope i sent it out already, got a positive response but no action was taken (or maybe those indivuduals refused to be on core)  [15:12]
<mnaser> for all i know, nothing changed  [15:13]
<evrardjp> ok  [15:13]
<evrardjp> so basically you're saying we effectively lost the way to provision k8s cluster on top of openstack using a community project.  [15:13]
<jroll> provision a k8s cluster via an API*  [15:14]
<evrardjp> sorry to sound harsh, but that sounds like a big deal to me :p  [15:14]
<jroll> using an openstack community project*  [15:14]
<evrardjp> jroll: thanks for clarifying  [15:14]
<jroll> there are definitely FOSS projects to provision k8s clusters :D  [15:14]
<evrardjp> that's exactly what I meant  [15:14]
<evrardjp> ofc you can still do it with heat and/or terraform using opensuse tools :)  [15:14]
<mnaser> i mean, i think it can work with like 1 specific config  [15:15]
<mnaser> but the defaults (like for example flannel) are broken  [15:15]
<evrardjp> wrong smiley but everyone got the idea there :)  [15:15]
<mnaser> they used to work but now out of the box, you cant just upload a fedora image and install magnum and get a cluster, most of that doesnt work out of the box  [15:15]
<jroll> yeah, that's rough  [15:16]
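
For reference, the "out of the box" flow being described above boils down to roughly the following, driven through the openstack CLI. This is a sketch: flag names follow the Magnum documentation of that era and may differ between releases, and the image filename and flavors are placeholders.

```python
# Sketch of the basic Magnum workflow: upload a Fedora Atomic image,
# define a cluster template, and ask Magnum for a Kubernetes cluster.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["openstack", "image", "create", "fedora-atomic",
     "--disk-format", "qcow2", "--container-format", "bare",
     "--property", "os_distro=fedora-atomic",
     "--file", "Fedora-AtomicHost-27.qcow2"])          # placeholder image file

run(["openstack", "coe", "cluster", "template", "create", "k8s-default",
     "--image", "fedora-atomic", "--coe", "kubernetes",
     "--external-network", "public",
     "--flavor", "m1.medium", "--master-flavor", "m1.medium",
     "--network-driver", "flannel"])                   # flannel is the default driver under discussion

run(["openstack", "coe", "cluster", "create", "demo",
     "--cluster-template", "k8s-default",
     "--master-count", "1", "--node-count", "2"])
```

The complaint in the discussion is that this default path (stock image plus default labels such as the flannel network driver) no longer produces a working cluster without extra fixes.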
<zbitter> so IMHO for provisioning within your own tenant, everything is going to move to cluster-api in the medium term  [15:16]
*** zbitter is now known as zaneb  [15:16]
<jroll> right  [15:16]
<zaneb> wrong nick  [15:16]
<evrardjp> zaneb: do you think the current cores of magnum have moved to that?  [15:17]
<zaneb> managed services could be a different story  [15:17]
<zaneb> evrardjp: I have no idea  [15:17]
<mnaser> yeah, but that's still in a little while (cluster-api) and that requires a "hosted" k8s cluster that a user has to self host  [15:17]
<zaneb> just looking in my crystal ball here  [15:17]
<jroll> does magnum still suffer from the security problems with service VM authentication and such?  [15:17]
*** dklyle has quit IRC  [15:17]
<mnaser> so it implies that user needs to a) create the "bootstrap" cluster ?? somewhere ?? and then use that to deploy again it  [15:17]
<mnaser> jroll: no it mostly uses heat to orchestrate this stuff  [15:17]
<mnaser> with the os-collect-config and it's friends  [15:18]
<evrardjp> mnaser: the bootstrap cluster can be pivoted to the final cluster after creation  [15:18]
<evrardjp> if necessary  [15:18]
<mnaser> evrardjp: right, but that's not exactly a good experience  [15:18]
<zaneb> mnaser: there's a bootstrap host, but you can run it on a VM on your laptop and it goes away once the cluster is up  [15:18]
*** dklyle has joined #openstack-tc  [15:18]
<mnaser> making an api request to get a cluster  [15:18]
<mnaser> vs having to setup a cluster locally to get a cluster  [15:18]
<mnaser> but that's a whole another discussion :)  [15:18]
<evrardjp> mnaser: the problem is not that, the problem is that we have software on our hands that don't match user expectations due to brokennesss  [15:18]
<evrardjp> let's not fix cluster api right now :)  [15:18]
<jroll> mnaser: so the magnum VMs are all owned by the tenant, and don't talk to magnum or anything? and so the user has to manage the cluster post-provision?  [15:18]
<mnaser> jroll: yes, its a user-owned cluster, magnum talks to it but over public apis  [15:19]
<mnaser> and it only talks to it to scale up or down afaik  [15:19]
<mnaser> it doesnt necessarily provide an "integration" point  [15:19]
<mnaser> maybe it does upgrades now? i dunno :)  [15:19]
<jroll> hm, ok  [15:19]
<mnaser> also, it deploys on top of fedora atomic 27, we're at 29 and atomic is being replaced by fedora coreos i think  [15:20]
<mnaser> so its quite seriously lagging behind in terms of being a solid deliverable :X  [15:20]
<jroll> just trying to figure out how much value magnum actually adds, and how much I care if it ends up going away  [15:20]
<zaneb> F27 has been EOL for quite a while  [15:20]
<mnaser> well i think you would care about it because: it provides an easy self-serve way for users to get a k8s cluster on demand  [15:20]
<mnaser> users *and* machines (think zuul k8s cluster for every job, lets say)  [15:21]
<jroll> sure  [15:21]
<jroll> but if other things in the landscape do it better... ¯\_(ツ)_/¯  [15:21]
<jroll> (I don't yet know if they do)  [15:21]
<evrardjp> so many tools  [15:22]
<mnaser> cluster-api is an option, but its still very early and you might as well as jsut deploy a bunch of vms and use kubeadm at that point if your'e going to go through the hassle of setting up a local vm/cluster to pivot to etc  [15:22]
*** ricolin has joined #openstack-tc  [15:22]
<jroll> there's metalkube if you're deploying on bare metal  [15:23]
<mnaser> i mean i spent a significant time trying to refactor things and trying to leverage the existing infrastructure inside magunm  [15:23]
*** e0ne has quit IRC  [15:23]
<zaneb> jroll: there will be but it's even earlier for metalkube  [15:24]
<mnaser> so that it relies on kubeadm to deploy instead of all the manual stuff it has now  [15:24]
<jroll> zaneb: ok  [15:24]
<mnaser> but.. i cant merge simple stuff righ tnow  [15:24]
<mnaser> i can't start working on something more complex and knowing ill just end up having to run a fork.  [15:24]
<jroll> right  [15:24]
<jroll> we also can't expect to keep every current openstack project alive, unfortunately  [15:25]
<evrardjp> jroll: agreed.  [15:25]
<mnaser> right, but i guess in this case, someone is ready to do the work and cleanup, but their hands are tied up :)  [15:25]
<jroll> so I think when these things come up, it's important to ask questions like "how many people use this", "how valuable is it", "are there other tools that do it better", etc  [15:25]
<evrardjp> cf. convo we had at last ptg about projects dying  [15:25]
<mnaser> and i've tinkered with the idea of: forking magnum and ripping out all the extra stuff in it which is cruft/extras  [15:25]
<mnaser> and make it a simple rest api that gives you k8s containers.  [15:26]
*** ijolliffe has joined #openstack-tc  [15:26]
<mnaser> (cause remember, magnum is a "coe" provisioner and not k8s one, but it only mostly is used for that)  [15:26]
<fungi> sorry, in several discussions at once, but the biggest complaint i've heard about users running kubernetes in their tenant is that it results in resources which are consumed 100% of the time (to maintain the kubernetes control plane), and second is that it means the users have to know how to build and maintain kubernetes rather than just interact with it  [15:26]
<zaneb> mnaser: wait, did you just describe Zun?  [15:28]
<mnaser> zaneb: zun afaik *uses* k8s clusters to run "serverless" (aka one time) workloads  [15:28]
<evrardjp> I don't think so  [15:28]
<zaneb> nope  [15:28]
<evrardjp> isn't zun just running containers?  [15:28]
<zaneb> that's a different project  [15:28]
<mnaser> i meant rather than having magnum be a "coe" deployment tool, it becomes a "k8s" deployment tool only  [15:28]
<evrardjp> I thought zun would integrate into k8s  [15:28]
<jroll> I read it as "gives you k8s clusters", not containers, but I'm not sure. but "k8s containers" aren't really a thing so  [15:28]
<mnaser> oh yeah thats qinling  [15:29]
<ricolin> evrardjp, more try to do serverless IMO  [15:29]
<zaneb> zun is a Nova-like API for containers instead of VMs  [15:29]
<mnaser> yes yes my bad  [15:29]
<evrardjp> I understood it like zaneb :)  [15:29]
* ttx waves  [15:29]
<mnaser> well no, i just meant removing all the stuff that lets you deploy dcos/swarm  [15:29]
<evrardjp> mnaser: our governance allows you to create and/or fork magnum project to make it what you want  [15:29]
<mnaser> and make it something that deploys k8s only and nothing else  [15:29]
<zaneb> mnaser: right, yeah, fair to say that k8s has beaten Mesos and whatever that Docker rubbish was called at this point  [15:29]
<evrardjp> hahaha  [15:30]
<mnaser> and that would be a cleaner api than the current "label" based loose api  [15:30]
<evrardjp> not sure that's politically correct, but that surely made me smile  [15:30]
<jroll> so if you're deleting most of the code and changing the API... sounds like a new project :)  [15:30]
<evrardjp> :)  [15:30]
* mnaser shrugs  [15:30]
<evrardjp> and then the question becomes -- why this vs using terraform for example?  [15:31]
<ttx> Yes, Zun just runs containers (not on K8s)  [15:31]
<mnaser> evrardjp: the api  [15:31]
<evrardjp> so yeah one is an API, the other one is client based  [15:31]
<mnaser> POST /v1/clusters => get me a cluster  [15:31]
<ttx> There is definitely value in having "K8s clusters as a service"  [15:31]
<mnaser> GET /v1/clusters => heres all my lcusters  [15:31]
<evrardjp> I get the idea, I am just playing devil's avocate here  [15:31]
<mnaser> the state is not local which is huge value for anything that grows to more than one person imho  [15:31]
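
A minimal sketch of the stripped-down "k8s clusters only" REST API being floated here, using Flask and in-memory state just to make the shape concrete. The /v1/clusters resource and its fields are hypothetical illustrations, not Magnum's actual API.

```python
# Hypothetical "POST /v1/clusters => get me a cluster" service, as sketched above.
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
CLUSTERS = {}  # server-side state: the point made above about state not being local

@app.route("/v1/clusters", methods=["POST"])
def create_cluster():
    spec = request.get_json(force=True)
    cluster_id = str(uuid.uuid4())
    CLUSTERS[cluster_id] = {
        "id": cluster_id,
        "name": spec.get("name", "k8s"),
        "node_count": spec.get("node_count", 1),
        "kube_version": spec.get("kube_version", "v1.15.0"),
        "status": "CREATE_IN_PROGRESS",  # a real service would drive Heat/kubeadm here
    }
    return jsonify(CLUSTERS[cluster_id]), 202

@app.route("/v1/clusters", methods=["GET"])
def list_clusters():
    return jsonify({"clusters": list(CLUSTERS.values())})

if __name__ == "__main__":
    app.run()
```

Keeping the cluster inventory behind an API like this, rather than in local client state, is the contrast being drawn with terraform-style tooling.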
<ttx> Now there may be simpler ways of doing that than going through Heat and disk images  [15:32]
<zaneb> ttx: is there though? to me it depends on whether you're talking about a managed service or just a provisioning tool  [15:32]
<evrardjp> advocate*  [15:32]
<ttx> zaneb: obviously useful for public clouds to do a GKE-llike thing  [15:33]
<ttx> but also for others (think CERN)  [15:33]
<zaneb> ttx: GKE is a managed service. I agree that's valuable  [15:33]
<evrardjp> ttx: it's also good for implementing provider of cluster-api  [15:33]
<ttx> On my recent trip I had multiple users that were interested in giving that service to their users  [15:33]
*** ricolin_ has joined #openstack-tc  [15:33]
<jroll> I agree there's values in being able to push a button and get a k8s cluster, I'm just wondering if there's value in that being an openstack project. it's a layer above the base infrastructure.  [15:33]
<mnaser> imho talking to users, people seem a lot more interested in prividing k8s-as-a-service  [15:33]
<ttx> their users in that case being internal developers  [15:33]
<evrardjp> but there is already a cluster api provider for openstack  [15:33]
<mnaser> which is in alpha  [15:34]
<mnaser> and all of cluster-api is being rewritten  [15:34]
<jroll> are k8s contributors still mostly avoiding working on openstack projects because they're labelled openstack?  [15:34]
<evrardjp> yeah that's fair  [15:34]
*** ricolin_phone has quit IRC  [15:34]
<mnaser> *and again* cluster-api is something you have to build a local VM or a cluster $somewhere to be able to make it work  [15:34]
<jroll> s/still//  [15:34]
<mnaser> jroll: i dunno, i dont think so?  [15:34]
<evrardjp> jroll: I don't think it's that  [15:34]
<ttx> jroll: I think there is. As demonstrated by all the public clouds  [15:34]
<mnaser> ttx: and yeah i agree, users want to provide a managed k8s as a service as they dont want to maintain things for their users  [15:34]
<mnaser> err, they WANT to maintain them  [15:35]
* ricolin_ just failing in connection so resend...  [15:35]
<mnaser> and make sure their clusters are all up to date, etc  [15:35]
<ricolin_> magnum make a nice bridge between k8s and OpenStack, which include connect the management work flow across also like integrate autoscaling for k8s on top of OpenStack, we do can try to figure out what other options we have but we need to also remember we have to also thing about those integration too  [15:35]
<ttx> mnaser: did you contact the good folks at CERN for an assessment ? They droev Magnum recently, depend on it  [15:35]
<jroll> so if there's so many clouds that want k8s as a service. and we have a project that does that, but is broken. why aren't these clouds contributing to making it work? or why is vexxhost the only one?  [15:35]
<jroll> are they all running a fork or?  [15:35]
<ttx> OVH developed their own Kubernetes as a service solution  [15:36]
<fungi> jroll: also some are avoiding working on openstack because they keep hearing from various places that openstack is a dead end and they should pretend it never existed  [15:36]
<lbragstad> ^ that's my question  [15:36]
<lbragstad> er - i have the same question :)  [15:36]
<ttx> They tried Magnum, fwiw  [15:36]
<mnaser> ttx: i have emailed both cores (one which is at cern), again, they were excited about having cores but no actionable things happened  [15:36]
<ttx> But they may have reached the same conclusion mnaser did  [15:36]
<mnaser> afaik city also runs magnum  [15:36]
*** ricolin has quit IRC  [15:36]
<ricolin_> ttx Catalyst too depends on magnum IIRC  [15:36]
<mnaser> catalyst is in magnum-core  [15:36]
<ttx> Like, you need to deploy Heat to have Magnum  [15:37]
<fungi> getting flwang to provide an update on the current state of magnum from his perspective may be a good idea  [15:37]
<ttx> if you don;t want that, you may prefer to build your own thing  [15:37]
<ricolin_> fungi, +1  [15:37]
<ttx> And I'd say, deploying for Kubernetes in 2019 you have lots of simpler options  [15:37]
<lbragstad> adriant might know something about it too - i want to say he worked with magnum some in the past at catalyst  [15:37]
<zaneb> mnaser: I think it'd fair to say that cluster-api might be 6-12months away from primetime, and we have a short-term interest in maintaining magnum to cover that gap, but it's not totally surprising that people aren't queueing up to invest in it  [15:37]
<mnaser> zaneb: i agree but i cant imagine cluster-api is replacing it.  unless we implement some really badass way of running the bootstrap cluster side by side with openstack  [15:38]
<ttx> zaneb: my understanding is that Cluster-API would not alleviate the need for magnum-like tooling  [15:38]
*** ricolin_ is now known as ricolin  [15:38]
<mnaser> so users dont have to *deploy* a bootstrap cluster  [15:38]
<mnaser> magnum will continue to solve a different issue  [15:38]
<ttx> But yes... Cluster-API would probably trigger a Magnum 2.0  [15:38]
<ttx> that would take advantage of ity  [15:38]
<ttx> Also, cluster-api is imho farther than 12 months away  [15:39]
<mnaser> so the only way magnum would be 'alienated': cluster-api running with openstack auth (as part of control plane), when creating a cluster, it uses those credentials to create a cluster in your own tenant  [15:39]
<zaneb> mnaser: you can always spin up the bootstrap VM on openstack itself from the installer (which already has your creds anyway)... I really don't see that being an obstacle to people  [15:39]
<mnaser> ++ it's undergoing a huge "redesign" atm  [15:39]
<ttx> from what hogepodge tells me of progress  [15:39]
<mnaser> not an obstacle for people but lets say zuul wanted to add k8s cluster support  [15:40]
<mnaser> 1 cluster for each job  [15:40]
<zaneb> it makes more sense to talk about the parts of cluster-api separately  [15:40]
<ttx> Next steps would be: is there critical mass to maintain Magnum as it stands ? Would there be additional mass to redo it in a simpler way (one that would leverage some recent K8s deployment tooling) ?  [15:41]
<zaneb> the Cluster part of cluster-api may be some time away and there are disagreements about what it should actually do, and it involves a master cluster to create the workload clusters and blah blah blah  [15:42]
<mnaser> ive personally been experimenting keeping heat as the infra orchestrator, but using ubuntu as the base os + kubeadm to drive the deployment  [15:42]
<zaneb> the Machine part of cluster-api is here to stay with only minor tweaks, and it's the important part  [15:42]
<mnaser> but it involves a lot of change inside magnum that while can be done, i dunno if i can actually getting it to land  [15:43]
<mnaser> and a lot has changed since magnum existed  [15:44]
<mnaser> we have really neat tooling like ansible to be able to orchestrate work against vms (instead of heat and its agents)  [15:44]
<mnaser> kubeadm is a thing which gives us a fully conformant cluster right off the bat  [15:44]
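
A sketch of the kubeadm-driven flow described above, assuming the VMs already exist (created by Heat, Ansible, or anything else) and have kubeadm preinstalled. The hostnames and pod CIDR are placeholders, not anything Magnum ships.

```python
# Drive kubeadm over SSH instead of Heat agents: init the control plane,
# then join the workers with the token kubeadm hands back.
import subprocess

def ssh(host, command):
    return subprocess.run(
        ["ssh", host, command],
        check=True, capture_output=True, text=True).stdout

MASTER = "ubuntu@k8s-master"                       # placeholder host
WORKERS = ["ubuntu@k8s-node-0", "ubuntu@k8s-node-1"]

# kubeadm handles certs, etcd, and the kubeconfig on the control plane node.
ssh(MASTER, "sudo kubeadm init --pod-network-cidr=10.244.0.0/16")

# kubeadm prints a ready-made join command that can be replayed on each worker.
join_cmd = ssh(MASTER, "sudo kubeadm token create --print-join-command").strip()
for worker in WORKERS:
    ssh(worker, "sudo " + join_cmd)
```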
<ricolin> Cluster API is a way easier and native solution for sure, but can we jump back on what can we deal with magnum's current situation?  [15:44]
<ttx> ricolin: I think we need to reach to current users and see if there is critical mass to maintain Magnum as it stands  [15:45]
<ricolin> so we need actions for ack users and cores  [15:46]
<ttx> Also reach out to thiose who decided not to use Magnum but build their own thing  [15:46]
<evrardjp> we are just answering the questions "are there alternatives explaining the disappearance of interest" I think  [15:46]
<ttx> and ask why  [15:46]
<mnaser> i think it just doesn't work easily and people give up  [15:47]
<ttx> I can speak to OVH to see why they did not use Magnum for their KaaS  [15:47]
<ttx> mnaser: yes that would be my guess too  [15:47]
<ricolin> agree, we do need a more detail evaluation  [15:47]
<evrardjp> I can tell why the next suse caasp product is not integrating with magnum, but I am not sure it interests anyone  [15:47]
<ttx> mnaser: could you reach out to the public clouds that DID decide to use magnum ? You mentioned Citycloud  [15:47]
<evrardjp> it's not a community project, it's a product  [15:47]
<mnaser> a product can be a result of an upstream community project :D  [15:48]
<evrardjp> fair  [15:48]
<mnaser> ttx: i can try, but i usually hear the "we're too busy" or "it just works in this one perfect combination we have"  [15:48]
<mnaser> (until you upgrade to stein and it breaks)  [15:48]
<mnaser> i guess it hits harder as we are closer to latest so we see these issues crop up before most..  [15:48]
*** altlogbot_1 has quit IRC  [15:48]
<ttx> mnaser: also would be good to hear from CERN  [15:49]
* ricolin wondering if it make sense to have an full evaluation and ask everyone help to work under a single structure or have APIs to cover all?:)  [15:49]
<ttx> They are probably the ones that are the most dependent on it, and they are also teh ones leading its maintenance recently  [15:50]
*** altlogbot_2 has joined #openstack-tc  [15:50]
<fungi> this is all great discussion, but we really ought to bring this up on the openstack-discuss ml where the maintainers and deployers of magnum can be realistically expected to chime in  [15:52]
<ttx> yes++  [15:52]
<ricolin> fungi, +1  [15:52]
<fungi> it might be good if mugsie and evrardjp, as the tc liaisons to magnum, could convince flwang to start that ml thread, even?  [15:52]
<ttx> 1/ ML, 2/ reach out to known users (or known non-users like OVH) and get details  [15:53]
<fungi> if flwang can't/won't start the discussion on the ml, then one of us can of course  [15:53]
* mnaser has already reached out once (not to start a topic but about the core thing)  [15:54]
<mnaser> perhaps someone else so im not nagging :)  [15:54]
<ricolin> I can do it:)  [15:54]
<ricolin> we got more close TZ  [15:54]
<ricolin> closer than evrardjp I believe:)  [15:55]
<fungi> thanks ricolin. to be clear, i do think it would be healthy for the ptl of magnum to start the discussion  [15:55]
<evrardjp> indeed. I don't mind sending emails though  [15:55]
*** jeremyfreudberg has quit IRC  [15:55]
<evrardjp> moar emails  [15:55]
<mnaser> ok great, thanks for coming to my ted talk everyone  [15:55]
<evrardjp> fungi: that's what I was thinking of -- how do we make that happen?  [15:55]
<ttx> mnaser: thanks for raising it  [15:55]
<mnaser> if it doesnt piss off the world, i'll.. work on magnum 2.0 -- publicly  [15:56]
<fungi> well, by asking him to start a "current state of magnum" or "what's to come for magnum" or similar sort of thread  [15:56]
<mnaser> and then we can maybe converge or just have something else  [15:56]
<mnaser> because at this point i've tried to do a demonstrable effort of pushing/fixing patches without much success, so ill probably have a branch somewhere  [15:57]
<fungi> flwang does at least seem to chime in on some of the recent magnum bug discussions on the ml, so presumably he's able to send messages at least occasionally  [15:58]
<ricolin> regarding getting user feedback, what actions do we list now?  [15:59]
<fungi> looks like the magnum "containers" meetings were being held somewhat regularly through april of this year, but they haven't had another logged for several months now. i see some discussion on the ml about meetings but seems they don't get around to holding them  [16:02]
<mnaser> on my side, i have tried to work with the osf to get magnum certified for the latest releases of k8s (conformant)  [16:04]
<mnaser> kinda like our powered program  [16:04]
<mnaser> and i cant really do that because the patches that need to land havent landed  [16:04]
<fungi> it was certified under an earlier version, right? i vaguely remember something about that  [16:05]
<mnaser> yes, but i think 1.12 (or was it 1.13 certification) has now expired  [16:05]
<mnaser> so 'technically' speaking we're certified for an expired thing so we cant really use that (which is no bueno)  [16:05]
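
For context, re-running the CNCF conformance suite against a Magnum-built cluster is typically done with sonobuoy, the tool the certification program uses. A sketch only: the mode and flags here are from memory and may vary by sonobuoy release.

```python
# Point sonobuoy at the kubeconfig fetched from the Magnum cluster,
# run the conformance e2e suite, and pull down the results tarball
# that gets submitted for certification.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

KUBECONFIG = "--kubeconfig=./config"   # placeholder path to the cluster's kubeconfig

run(["sonobuoy", "run", KUBECONFIG, "--mode=certified-conformance", "--wait"])
run(["sonobuoy", "retrieve", KUBECONFIG, "."])   # writes the results tarball locally
```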
*** iurygregory has quit IRC  [16:20]
<hogepodge> Sorry to drop in late on this  [16:21]
<hogepodge> Part if it's coming up because we lost our K8s certification about a month ago.  [16:21]
<hogepodge> I was working with mnaser (who was doing all the work really tbh) to run the conformance tests and reestablish that certification, and we honestly thought it would be a quick job. It wasn't, and its in part because magnum needs to be updated to work with recent K8s releases.  [16:24]
<hogepodge> A few points on the discussion. With the success that CERN is having with Magnum, managing hundreds of clusters with it, I think it's a valuable project. Installing Kubernetes securely and easily is a problem, and Magnum provides an API that is a solution to that problem.  [16:25]
<hogepodge> We need to think of Cluster API as a tool for installing K8s also. It's still in development and not stable yet, but it will be stable with basic functionality soon. If the project leaders are successful, it will become the preferred tool for installing and managing K8s clusters on any cloud, and will add other valuable features like auto-scaling and auto-healing that should be provider-independent.  [16:26]
<hogepodge> So if Magnum is to continue, at some point in the future it would be to benefit of the project to use Cluster API as the orchestration tool under the hood. It's not an either-or proposition.  [16:27]
*** lpetrut has quit IRC  [16:29]
<hogepodge> But if as a community we decide there's more value to user-managed clusters using the deployment tools out there and Magnum isn't providing value for our users, that's fine. We should support efforts to maintain openstack provider integrations and cluster management tools though.  [16:31]
*** ricolin has quit IRC  [16:39]
*** diablo_rojo has joined #openstack-tc  [17:11]
*** jaypipes has quit IRC  [17:18]
* dhellmann apparently missed an epic office hours  [18:01]
<dhellmann> mnaser : another option to consider is just having the TC add some core reviewers to the magnum team, if the existing team is not responsive. That would obviously have to come after starting the new mailing list thread.  [18:02]
<fungi> yep, i totally see that as a possible step, but only after there's been community discussion  [18:04]
*** dklyle has quit IRC  [18:11]
*** dklyle has joined #openstack-tc  [18:12]
*** jamesmcarthur has joined #openstack-tc  [18:17]
*** dims has quit IRC  [18:19]
*** dims has joined #openstack-tc  [18:29]
*** lbragstad has quit IRC  [18:51]
*** mriedem has quit IRC  [18:54]
*** mriedem has joined #openstack-tc  [19:03]
*** tosky has quit IRC  [19:10]
<hogepodge> mnaser: it doesn't help much because I'm not core, but I went and reviewed the entire patch backlog on magnum  [19:15]
<hogepodge> to my eye there's a lot of pretty basic maintenance stuff that should be no problem to merge  [19:15]
*** jaypipes has joined #openstack-tc  [19:16]
*** dims has quit IRC  [19:30]
<jrosser> re. magnum from a deployer perspective has been a hard journey, just about made things work in rocky fixing a bunch of bugs along the way and chasing reviews to get bugfixes backported - i figure not many folks will follow it through so persistently with contributions  [19:38]
*** dims has joined #openstack-tc  [19:39]
*** e0ne has joined #openstack-tc  [19:39]
<jrosser> then i've had to revert a bunch of broken code out of the stable branches which should never have been merged to master imho, those reverts are largely still not merged  [19:39]
*** jamesmcarthur has quit IRC  [19:40]
<jrosser> given it's all broken in stein, i'm sad to say that my users are now using rancher instead  [19:40]
*** jamesmcarthur has joined #openstack-tc  [19:41]
<dhellmann> I'm not sure it's absolutely necessary for us to provide openstack APIs to do *everything*  [19:42]
<fungi> especially if there are other api services which can be run alongside openstack, and especially especially if they can share resources (authentication, block storage, networks, et cetera)  [19:44]
*** jamesmcarthur has quit IRC  [19:46]
<evrardjp> dhellmann: agreed with you  [20:05]
<evrardjp> it's not my approach to maintain an API if a client I don't have to maintain does an equivalent feature...  [20:06]
<evrardjp> rather help maintain the client instead than building my own thing  [20:06]
*** jamesmcarthur has joined #openstack-tc  [20:11]
*** jamesmcarthur has quit IRC  [20:19]
*** jamesmcarthur has joined #openstack-tc  [20:30]
*** diablo_rojo has quit IRC  [20:40]
*** mriedem has quit IRC  [20:52]
*** mriedem has joined #openstack-tc  [20:53]
*** jamesmcarthur has quit IRC  [21:00]
*** jamesmcarthur has joined #openstack-tc  [21:01]
*** jamesmcarthur has quit IRC  [21:13]
*** diablo_rojo has joined #openstack-tc  [21:19]
*** whoami-rajat has quit IRC  [21:28]
<hogepodge> jrosser: I saw the reversions. Those are failing the zuul gate it looks like  [21:34]
<hogepodge> dhellmann: sure, the issue right now being is we have a high profile user running a lot of clusters. If I had to guess why patches haven't been merged, it's because we're outside of the academic season and also pushing against European holidays.  [21:36]
<hogepodge> In the short term I would support promoting some trusted people to core and letting them merge patches, especially the easy ones (there are a bunch of cleanups) and the critical ones that have been production tested, then reevaluating what the path forward is once the core team members are back  [21:38]
*** jamesmcarthur has joined #openstack-tc  [21:46]
*** jamesmcarthur has quit IRC  [21:51]
*** e0ne has quit IRC  [21:56]
<dhellmann> hogepodge : ok, but "it's a bad time of the year" seems like a very good reason for us to push to expand the size of that team, too  [22:07]
<dhellmann> the sun doesn't set on openstack, and all that  [22:07]
*** ijolliffe has quit IRC  [22:36]
*** jamesmcarthur has joined #openstack-tc  [22:38]
*** jamesmcarthur has quit IRC  [22:44]
*** mriedem has quit IRC  [23:18]
*** jamesmcarthur has joined #openstack-tc  [23:20]
<hogepodge> oh yeah, definitely  [23:22]
*** jamesmcarthur has quit IRC  [23:24]
*** tjgresha has quit IRC  [23:31]
*** jamesmcarthur has joined #openstack-tc  [23:50]
*** jamesmcarthur has quit IRC  [23:55]
*** smcginnis has quit IRC  [23:56]

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!