Tuesday, 2020-02-11

*** slaweq has quit IRC00:04
*** jamesmcarthur has joined #openstack-tc00:14
*** jamesmcarthur has quit IRC00:19
*** jamesmcarthur has joined #openstack-tc00:24
*** jamesmcarthur has quit IRC00:29
*** tetsuro has joined #openstack-tc01:33
*** david-lyle is now known as dklyle02:05
*** jamesmcarthur has joined #openstack-tc02:09
*** jamesmcarthur has quit IRC02:28
*** jamesmcarthur has joined #openstack-tc02:31
*** jamesmcarthur has quit IRC02:42
*** tetsuro has quit IRC04:02
*** tetsuro has joined #openstack-tc04:12
*** tetsuro has quit IRC05:24
*** tetsuro has joined #openstack-tc05:25
*** evrardjp has quit IRC05:34
*** evrardjp has joined #openstack-tc05:34
*** tetsuro has quit IRC05:37
*** lpetrut has joined #openstack-tc07:03
*** witek has joined #openstack-tc07:56
*** e0ne has joined #openstack-tc07:57
*** e0ne has quit IRC08:00
*** tosky has joined #openstack-tc08:27
*** e0ne has joined #openstack-tc08:42
*** e0ne has quit IRC08:44
*** rpittau|afk is now known as rpittau08:55
ttxohai09:00
mnasero/09:04
mnaseri've been having crazy ideas lately and wondering if many of our *aaS projects can benefit in some way or another by being integrated directly on top of k8s09:05
mnaserwe seem to be pretty good at delivering *aaS -- k8s is pretty good at scheduling lightweight controllers09:05
mnaserso something like manila (or even trove) can benefit from that whole existing ecosystem and all they provide is a nicely integrated API to deliver these blocks of infrastructure09:06
*** slaweq has joined #openstack-tc09:07
ttxI think that was the thinking behind standalone Cinder09:09
ttxand Kuryr09:10
ttxor are you talking about something else?09:10
mnaserttx: yeah, something similar to that, but making services both functional as standalone but also leveraging the k8s system09:14
mnaserfor example: instead of octavia creating service vms inside nova, it can simply schedule containers inside a k8s cluster that's properly configured (with cinder/kuryr/neutron to be able to provide a 'native' experience)09:15
mnaserand that way amphoras are not these heavy images but they're just docker images that can be pulled anytime/anywhere09:15
ttxCould be containers that either run in a pod or a VM, and support both "native K8s" and "native openstack"09:18
mnaserright, i think that might simplify the deployment experience so much more too (and the maintenance story too)09:18
mnaserit's really difficult right now to manage those service VM services, they've historically been one of the hardest to deploy/manage09:19
ricolino/09:22
mnasera wild idea spinning off of this is many of the openstack services perhaps becoming k8s operators which provide both a native k8s api and a native openstack api09:25
mnaserand given we have the possibility of keystone auth in k8s, you could potentially view your resources via 2 different apis09:25
ttxthat... might be a bit too wild09:29
mnaseri need to come up with really wild things so that the less wild ones seem more sane09:30
mnaser=P09:30
ttxhaha09:33
ttxI suggest we rewrite nova in serverless where the function would just spin up a vm.09:34
mnasernow you're making mine seem reasonable09:34
mnaserthank you09:34
ttxI call it serverlesserver09:34
evrardjpreading scrollback09:34
mnaseri wonder how hard it would be to poc this for something that's relatively simple like 'cache as a service' for something like memcache09:34
evrardjpmnaser: yeah that's been what we've discussed with mugsie and zaneb09:35
ttxAlso we should write it in Rust because Go is so yesterday09:35
evrardjphaha09:35
evrardjpnot sure licensing is appropriate ;)09:35
mnaseri mean i'd be up for it to be in go but not sure how many other people would be excited about that :)09:36
mnaserhaving static builds and no dependency mess is really, really nice.09:36
evrardjpbut yeah, I think the most successful services going forward will be the ones that not only work inside the whole traditional openstack, but also live outside it (standalone), like ironic, cinder, or manila. But hey I am no oracle :p09:36
evrardjpmnaser: there is still a dependency mess09:36
evrardjpit's just hidden somewhere else09:37
ttxand a supply chain mess09:37
evrardjpabout that I was analysing some things :)09:37
evrardjpshould we have an openstack OS? :D09:37
mnaseri dunno, i've enjoyed it and haven't had too many bad experiences with go09:37
ttxon topic: https://kubernetes.io/blog/2020/02/07/deploying-external-openstack-cloud-provider-with-kubeadm/09:40
evrardjpI should probably give a link to our suse thing doing the same :p09:41
mnasernjeat09:41
mnaserneat09:41
evrardjpI obviously won't09:41
mnaserttx: the idea is to leverage all of that. so for example, add kuryr to the mix and you can easily have a 'memcache as a service'09:42
mnaserand i hate to say it but getting a k8s cluster up is a hell of a lot easier than getting openstack up, so i dont know if people will be that freaked out by it.09:43
evrardjpnot sure to understand that last sentence.09:43
mnaserif our *aaS projects can be simplified by having access to a k8s cluster09:44
mnaseri think that's a relatively trivial bridge to jump to get a lot of value09:44
evrardjpoh you mean exposing the openstack services inside k8s?09:44
evrardjpthat's the scope of openstack cloud provider integration  ...09:45
mnaserlet me try again09:45
mnaser"if you want to use openstack's memcache-as-a-service, you will need a kubernetes cluster that's integrated with your existing openstack cloud using the cloud provider / kuryr / etc"09:45
mnaserbecause it will be deploying pods on top of that09:46
mnaseri'm thinking at the end of the day, openstack's users are those who want *aaS.  k8s users are those who are building things on top of it09:46
mnaserhistorically, we've called ourselves a 'cloud operating system' so we might as well deliver on that in the best way possible09:46
evrardjpwait, so you mean rewiring APIs to basically trigger operators?09:47
mnaserpossibly, something along those lines09:47
evrardjplet me rephrase this09:47
mnaserhence my comment earlier of: some of our services can become k8s operators that rely on a cluster that is deeply integrated with the core openstack services such as neutron/cinder09:47
evrardjpwell, I guess I don't need to if you said so09:47
mnaserand yet deliver an openstack api experience (and by second nature, a k8s native one too)09:48
evrardjpmnaser: I think what matters here is that we don't have too many *aaS -> the ones that are more appropriate to be in k8s should be moved there09:48
mnaserbecause the openstack api experience (a normal rest api) will end up just creating those resources which are then managed09:48
evrardjpis that what you're saying?09:48
ttxSo.. a brand of k8s operators that would depend on Kubernetes being run on openstack cloud provider ?09:48
mnaseri think we have some *aaS that benefit a lot of being operators imho09:49
mnaseryes, but more than openstack cloud provider, also things like kuryr09:49
mnaserso you can access them for example from your VMs09:49
mnaseror expose them via floating ips09:49
ttx"operators for openstack-backed kubernetes" ?09:49
mnasercorrect09:49
ttxgod I hate that operators name09:49
mnaserand there's a ton of projects that could fit that criteria09:49
evrardjpttx: what mnaser is looking for is to say (for example) "I provide you db aaS, and you can reach it from anywhere, vms or k8s cluster. And behind the scenes, it's on k8s".09:50
ttxEven koperators would have been better, and I also hate the knaming kmania09:50
evrardjpmnaser: did I get that right on your user perspective?09:50
evrardjpttx: haha09:50
mnaserevrardjp: somewhat correct, id drop the 'or k8s cluster'09:50
evrardjpbut this is kool09:50
mnaserbecause we can expose a k8s api to add/remove/manage09:50
mnaserbut not to run any workloads09:51
mnaserthat's another story09:51
evrardjpyup09:51
mnaserso you use rbac rules to limit access to only those 'operator managed' resources09:51
evrardjpI didn't mean that the thing had to run inside :)09:51
evrardjpjust reachable/provisionable09:51
evrardjpbut yeah09:51
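
A rough illustration of the rbac idea above, in Go with the k8s.io/api rbac/v1 types: a namespaced Role that only grants access to the operator-managed custom resources, so a tenant can provision and manage those but cannot run arbitrary workloads. The API group and resource name are made-up placeholders, not anything that exists today.

    package main

    import (
        "fmt"

        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // tenantRole builds a Role covering only the hypothetical operator-managed CRs
    // ("memcacheclusters" in a made-up "openstack.example.org" group), so a tenant
    // bound to it cannot create pods, deployments, etc. directly.
    func tenantRole(namespace string) *rbacv1.Role {
        return &rbacv1.Role{
            ObjectMeta: metav1.ObjectMeta{Name: "openstack-tenant", Namespace: namespace},
            Rules: []rbacv1.PolicyRule{{
                APIGroups: []string{"openstack.example.org"},
                Resources: []string{"memcacheclusters"},
                Verbs:     []string{"get", "list", "watch", "create", "update", "delete"},
            }},
        }
    }

    func main() {
        fmt.Printf("%+v\n", tenantRole("demo-project"))
    }
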
mnaseras an operator (the human, not the k8s concept), this makes life so much easier09:52
mnaserand it opens up a whole bunch of useful things where someone might say "openstack is neat but i like their memcache operator so i'll just deploy that on my k8s and contribute fixes that i need"09:53
evrardjpthis/these new *aaS API can probably be scaffolded09:53
mnaserit feels a little weird, building tools for k8s, but in a way it opens up value beyond openstack and potentially other contributors09:53
mnaserobviously at the heart of this remains our core stuff: nova/neutron/cinder/etc09:54
evrardjpI think this would be nice to write in an email on the ML and distill that in the ideas repo! :)09:55
evrardjpsee if that gets traction09:55
mnaseri think i'll wait until NA wakes up and people see the insanity that i'm bringing up and hear what they have to say =P09:56
*** slaweq has quit IRC09:56
*** e0ne has joined #openstack-tc09:57
evrardjpI have serious doubts on the willingness to maintain an api for a translation layer09:58
evrardjpwell it's not only translation but...09:58
mnaseryou dont need an api for translation layer09:58
evrardjpttx: about your questions on ideas, indeed the intention was that anyone could merge09:58
mnaseryou just need an api that creates k8s cr's09:58
evrardjpthat's what I meant09:59
mnaserprobably relatively trivial given our current openstack tooling imho09:59
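
A minimal sketch of that "api that creates k8s cr's", assuming a hypothetical MemcacheCluster CRD in a made-up openstack.example.org group and the client-go v0.17-era dynamic client; the handler just turns a REST call into a custom resource and leaves everything else to the operator (the keystone project to namespace mapping is stubbed out).

    package main

    import (
        "encoding/json"
        "net/http"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/rest"
    )

    // GVR of the hypothetical CRD the operator watches.
    var memcacheGVR = schema.GroupVersionResource{
        Group: "openstack.example.org", Version: "v1alpha1", Resource: "memcacheclusters",
    }

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        dyn, err := dynamic.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        http.HandleFunc("/v1/caches", func(w http.ResponseWriter, r *http.Request) {
            var req struct {
                Name     string `json:"name"`
                SubnetID string `json:"subnet_id"` // neutron subnet the cache should land on
            }
            if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
                http.Error(w, err.Error(), http.StatusBadRequest)
                return
            }
            // The wrapper does nothing clever: it writes a CR and the operator
            // reconciles it into actual memcached pods. The project-to-namespace
            // mapping is hard-coded just to keep the sketch short.
            cr := &unstructured.Unstructured{Object: map[string]interface{}{
                "apiVersion": "openstack.example.org/v1alpha1",
                "kind":       "MemcacheCluster",
                "metadata":   map[string]interface{}{"name": req.Name},
                "spec":       map[string]interface{}{"subnetID": req.SubnetID},
            }}
            if _, err := dyn.Resource(memcacheGVR).Namespace("demo-project").Create(cr, metav1.CreateOptions{}); err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            w.WriteHeader(http.StatusAccepted)
        })
        if err := http.ListenAndServe(":8080", nil); err != nil {
            panic(err)
        }
    }
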
ttxevrardjp: ok, so it's more of a community wall ? And core reviewers are just checking that it's an idea and not spam but would otherwise not question the content?10:00
evrardjpprobably not hard indeed10:00
evrardjpttx: correct10:00
evrardjpit's written in the documentation of the repo10:00
ttxevrardjp: OK, so it's a bit like how we (I) handle openstack-planet or irc-meetings10:00
ttxWill +1 all, sorry for having held it10:01
evrardjpthe review should also reflect what's being discussed on the ml10:01
evrardjpso a simple check10:01
evrardjpbut yeah, not questioning the content10:01
evrardjpif you could tox -e docs on the ideas repo, and xdg-open doc/build/html/index.html you should see :)10:01
evrardjpor wait until this whole thing gets merged :)10:02
*** rpittau is now known as rpittau|bbl11:21
*** fungi has quit IRC11:24
*** fungi has joined #openstack-tc11:27
*** openstackgerrit has joined #openstack-tc11:47
openstackgerritJean-Philippe Evrard proposed openstack/governance master: Introduce 2020 upstream investment opportunities.  https://review.opendev.org/70712011:47
*** ricolin has quit IRC12:10
*** jaosorior has joined #openstack-tc12:15
*** jaosorior has quit IRC12:32
*** adriant has quit IRC13:02
*** adriant has joined #openstack-tc13:03
*** jamesmcarthur has joined #openstack-tc13:15
openstackgerritJean-Philippe Evrard proposed openstack/governance master: Add QA upstream contribution opportunity  https://review.opendev.org/70663713:16
*** rpittau|bbl is now known as rpittau13:22
*** jamesmcarthur has quit IRC13:24
*** jamesmcarthur has joined #openstack-tc13:25
gmanno/13:27
*** jamesmcarthur has quit IRC13:28
*** jamesmcarthur has joined #openstack-tc13:28
*** jamesmcarthur has quit IRC13:35
openstackgerritNate Johnston proposed openstack/governance master: Add QA upstream contribution opportunity  https://review.opendev.org/70663713:35
njohnstono/13:35
smcginnisThere are a lot of folks running k8s, but also a very large set not running it with no interest in it.13:39
tbarronI think mnaser evrardjp and zaneb among others know my interest in running manila "share servers" per-tenant in dynamically spawned containers13:39
tbarronso k8s orchestrated13:39
smcginnisSo that means integration with k8s could be very useful to some end users, but it would greatly complicate projects because they would need to handle both. So it's an AND and not an OR. Or not even an OR if I followed the conversation.13:40
tbarroni'm actually less interested in "manila standalone" if that's like what jgriffith did for "cinder standalone", i.e. no-auth w/o keystone13:40
mnaserthe idea is that these projects' architecture changes entirely, in that they're built on top of k8s overall13:40
tbarronI want keystone multi-tenancy in the API, a major deficit in k8s for those who think k8s plus metal3 plus kubevirt would be sufficient by itself for an IaaS13:41
mnaserand the apis dont do much more than create a k8s resource that then creates it, its a huge shift13:41
mnaserobviously i imagine that we'd do some mapping where project => ns or something along those lines13:41
mnaserbut i think the value we get out of it means ripping out a *crapton* of code (like the generic driver in manila, amphora vm management/deleting/etc in octavia, etc)13:42
tbarronso the direction of https://github.com/tombarron/pocketmanilakube is to run manila and keystone under k8s13:42
mnaserand realistically most of this will probably be abstracted in the deployment tools13:42
mnaserno one wants to run rabbitmq but they all do anyways :P13:42
tbarronzaneb has suggested getting the manila services to use grpc instead of rabbitmq13:42
mnaser++13:43
*** jamesmcarthur has joined #openstack-tc13:43
tbarronreally the only dependencies are rpc/messaging across services, database, and keystone13:43
tbarronmnaser: that means no generic driver of course with its cinder, neutron, nova dependencies13:44
mnaserright but it would abstract it to say.. csi stuff, which cinder has an impl and these services would run on an "openstack-ified k8s" which has those integrated and plumbed a few layers down13:44
smcginnisA/2313:46
tbarronmnaser: yeah, i'm not saying throw out cinder, just run cinder-csi and manila-csi and cinder and manila without cross-dependencies13:48
mnaserand hell, if someone wants to use something else under it, why not13:49
*** ijolliffe has joined #openstack-tc13:52
zanebmnaser: I wish to subscribe to your newsletter :)14:03
fungiand here i thought i was going to wake up to a proposal for rewriting swift in swift14:05
ttxswiftly14:05
mnaserSoS would be accurate14:05
fungiif only all of our services could have programming languages named after them14:06
fungihttp://navgen.com/nova/14:06
mnaserhttps://github.com/the-neutron-foundation/neutron-language14:07
fungisee, apparently they do14:08
smcginnisCinder has a book, does that count? https://www.goodreads.com/book/show/36381037-cinder14:08
fungiwhy did i never think of this before?14:08
smcginnis"the tale of a teenage cyborg who must fight for Earth's survival against villains from outer space"14:08
smcginnisBeat that manila.14:08
zanebrofl14:08
fungiwell, there's also a c++ visualization lib named cinder, but i'm still digging deeper14:08
fungigranted, that book is hard to top14:08
zanebmnaser, evrardjp: re dependency management in golang: without go modules it's at least as/more painful than in Python. (I have high hopes for go modules, but not tried it out yet.) It does at least put it entirely on the developer and not make it the deployer's problem, so that's not nothing14:08
mnasergomodules have been nothing but a joy personally14:09
mnaseryou dont have to wake up one day to find every deployment ever that didnt pin virtualenv broke :)14:09
fungiat least there are workarounds14:10
tbarronsmcginnis: well you didn't have the smarts to pick an obscure skunk as a mascot, or to name yourself after an office folder only used by previous generations14:10
zanebmnaser: it does indeed look better once you can migrate to it14:11
tbarronmnaser: go mod is useless automation that ruins requirements' teams' job security14:11
mnaser:p14:11
tbarronseriously for manila-csi and nfs-csi (just switched from go dep) they seem to be a nice simplification14:13
zanebtbarron: from go dep or just dep?14:15
tbarronzaneb: go dep14:16
zanebyou just skipped right over dep? :)14:16
zaneblet's hope they got it right on the third try14:17
zanebbecause Python certainly didn't14:17
* fungi still uses manila file folders to organize his personal papers14:19
* fungi gets the feeling he's being called "old"14:19
mnaserok so it seems like kuryr recently added support for the ability to connect multiple vifs.. but more importantly to a subnet defined at run-time14:33
mnaseri keep going back to 'memcache as a service' cause it seems like the initial trivial one14:33
mnaserit means that you can probably give it a subnet id and then rely on kuryr to plumb the memcache containers correctly (and make sure they're configured via env properly)14:34
mnaserlooks like we literally have all the right tools available today :)14:35
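
Roughly what that could look like from the operator's side: it stamps out memcached pods whose metadata names the neutron subnet to attach, and kuryr does the actual wiring. The annotation key and value format below are invented for illustration and are not kuryr's real interface.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // memcachePod sketches a pod the operator might create for a tenant cache,
    // tagged with the neutron subnet it should be plugged into. The annotation
    // key is a placeholder, not an actual kuryr annotation.
    func memcachePod(name, subnetID string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name: name,
                Annotations: map[string]string{
                    "openstack.example.org/subnet-id": subnetID, // hypothetical key
                },
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "memcached",
                    Image: "memcached:1.5",
                    Args:  []string{"-m", "512"}, // sizing/env would come from the CR spec
                }},
            },
        }
    }

    func main() {
        fmt.Println(memcachePod("cache-0", "11111111-2222-3333-4444-555555555555").Annotations)
    }
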
tbarronzaneb: i may not know what i'm talking about, manila-csi was built using 'go mod'; csi-driver-nfs just switched from using 'Gopkg.lock' and 'Gopkg.toml' to go modules14:37
zanebyeah, I think 'Gopkg.lock' and 'Gopkg.toml' are dep. I never actually used go dep so I don't know how that worked14:39
mnaseri am fairly sure go modules make use of go.mod files14:40
zanebyes, go mod is the latest (and best) and uses go.mod files instead14:41
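
For reference, the go.mod for something like the operator being discussed is only a few lines; the module path and version pins below are illustrative examples, not recommended releases.

    module opendev.org/example/memcache-operator

    go 1.13

    require (
        k8s.io/api v0.17.0
        k8s.io/apimachinery v0.17.0
        k8s.io/client-go v0.17.0
        sigs.k8s.io/controller-runtime v0.4.0
    )
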
*** witek has quit IRC14:41
tbarronagree, my ignorance was exactly what we were moving *from* with the Gopkg.lock and Gopkg.toml14:43
zanebyou know, it's entirely possible that I made up the idea that the thing before dep was called go dep14:47
zanebanyhoo14:47
zanebmnaser: I agree with large parts of your idea. I think the future of cloud (especially public cloud) is in managed services (like DBaaS), and that by far the easiest way to run those is going to be on k8s. we've been trying for nearly a decade to get Nova to support the stuff that e.g. Trove needed to do that; Nova just never cared about that use case and now the world has moved on14:55
zanebmnaser: I also think the future of cloud is on bare metal, and I think that's where we disagree. but I think a lot of the things we're both thinking about are common to both. I'm very interested in finding ways to do managed services that can work in either context14:57
mnaserzaneb: good to hear15:00
mnaseri'd like to personally explore the possibility of what this might look like, so i'm personally going to: a) write a simple standalone memcache operator that works with vanilla k8s, b) build a k8s cluster that integrates with neutron using kuryr, c) iterate on (a) to support plugging into specific subnets as a user, d) create a very simple wrapper api that creates resources on behalf of a user with a restful api (to15:02
mnaserwhere operator picks things up afterwards)15:02
mnaserbecause i think it makes sense to see what that might really look like in practice and maybe share that poc for others to reproduce/try out15:02
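
For step (a), the heart of such an operator is a single reconcile loop; a bare-bones controller-runtime (v0.4-era API) version might look like the sketch below. The MemcacheCluster type and its module path are assumptions, and the part that actually creates or scales memcached pods is left as a stub.

    package controllers

    import (
        "context"

        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"

        cachev1alpha1 "opendev.org/example/memcache-operator/api/v1alpha1" // hypothetical
    )

    // MemcacheReconciler drives MemcacheCluster CRs toward their desired state.
    type MemcacheReconciler struct {
        client.Client
    }

    func (r *MemcacheReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
        ctx := context.Background()

        var cluster cachev1alpha1.MemcacheCluster
        if err := r.Get(ctx, req.NamespacedName, &cluster); err != nil {
            // CR deleted; owned pods are garbage-collected via owner references.
            return ctrl.Result{}, client.IgnoreNotFound(err)
        }

        // A real operator would create/scale memcached pods here (annotated with
        // cluster.Spec.SubnetID for kuryr, per step (c)) and update cluster.Status.
        return ctrl.Result{}, nil
    }

    func (r *MemcacheReconciler) SetupWithManager(mgr ctrl.Manager) error {
        return ctrl.NewControllerManagedBy(mgr).
            For(&cachev1alpha1.MemcacheCluster{}).
            Complete(r)
    }
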
zanebmnaser: you could save time by just using an existing operator15:03
zanebI don't see one for memcache15:04
zanebbut you could do e.g. etcd or something15:04
mnaserzaneb: that's actually a reasonable thing, the main reason behind having access to an existing one is finding a way to add our openstack-y bits on top of it (i.e. a subnetId field for example)15:04
mnaserbut i think i can get away with that by using a very lightweight operator that just does all this on top of it15:05
zanebyeah, that's what I would do15:05
mnaserso it creates etcd resources15:05
* mnaser scrolls through https://operatorhub.io/15:05
zanebbecause eventually you'll want to extend this to running any kind of service that the cloud operator can install an operator for15:05
* zaneb also hates the name 'operator'15:05
mnaseri wonder if i can get away with actually just embedding other operators into one big openstack operator15:06
mnaser(i've done this in the past and it worked pretty nicely)15:07
mnaserso you just run one pre-integrated operator rather than 1 "openstack" operator and 1 "per service"15:07
mnaser(all you do is register the controllers when you start up and you're good to go)15:07
zanebfor running OpenStack services themselves, you mean?15:07
mnaserno for example a: openstack-operator which embeds memcache-operator and postgres-operator etc15:08
mnaserso you dont need to run openstack-operator + all of these other ones, it's just one big operator that manages all of the CRs15:08
zanebah, right15:08
zanebpotentially15:09
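
What that looks like in practice with controller-runtime is a single manager that all the embedded operators register their controllers with; the sub-operator packages here are hypothetical, but the wiring is the standard pattern.

    package main

    import (
        "os"

        ctrl "sigs.k8s.io/controller-runtime"

        // Hypothetical embedded operators, each exposing its reconcilers.
        "opendev.org/example/openstack-operator/controllers/memcache"
        "opendev.org/example/openstack-operator/controllers/postgres"
    )

    func main() {
        // One manager: one process, one shared cache, one leader election.
        mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
        if err != nil {
            os.Exit(1)
        }

        // "Register the controllers when you start up and you're good to go."
        if err := (&memcache.MemcacheReconciler{Client: mgr.GetClient()}).SetupWithManager(mgr); err != nil {
            os.Exit(1)
        }
        if err := (&postgres.PostgresReconciler{Client: mgr.GetClient()}).SetupWithManager(mgr); err != nil {
            os.Exit(1)
        }

        if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
            os.Exit(1)
        }
    }
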
evrardjpSome colleagues proposed this kind of meta operator, I am not too fond of it. I don't see it as more valuable to deal with the lifecycle of the child operators than dealing with them directly in some kind of tooling15:11
evrardjpelse your 'openstack-operator' becomes very complex.15:12
evrardjpBut I would be glad to be proven wrong.15:12
mnasergoing back to the deployer experience15:12
mnaseri think they'd rather have one operator that's much easier to deploy with sane defaults than our historical million choices15:13
evrardjpand in it, just boolean to flip if you want *insert child operator * ?15:13
mnaseror it always runs and if you want to disable it you use rbac15:14
mnaseri wonder how much time it could take me to get something like trove's api "re-implemented" with postgres-operator for example15:16
evrardjpI am not sure the user experience will be better than k8s service catalog.15:19
evrardjpor maybe I misunderstand you15:20
evrardjpI guess it depends on what you want the users to see.15:20
mnaserusers would see openstack only resources15:20
mnaserthey wouldnt see the stuff plumbed beneath it15:20
johnsommnaser: FYI, there are still some major limitations in the k8s scheduler for infrastructure type applications like load balancing.15:20
mnaserjohnsom: oh? curious as to hear what are some of them for example?15:21
johnsomWe have been following it closely, but there is a lot of pushback for what we need out of the scheduler15:21
mnaserjohnsom: any examples that stick out?15:22
*** jamesmcarthur has quit IRC15:23
johnsomFYI, I am about 30 minutes out from being in the "office" and my first cup of coffee so bear with me.15:24
evrardjp:)15:24
johnsomThe most important one is keeping the scheduler from "evacuating" infrastructure services from pods. I.e. currently there is no way to stop k8s from killing an instance to "move" it to another pod for workload balancing.15:25
mnaserahaha15:25
johnsomThis means you can't stop it from interrupting network flows, like downloading a large file, mid-flow.15:26
*** jamesmcarthur has joined #openstack-tc15:26
mnaserhmm15:26
johnsomThey have added the new priority concept, but it still doesn't preempt this.15:26
johnsomThe other issue we see, though there are (granted, ugly) ways around it, is the layers of packet translation required with the k8s pods.15:27
mnaserah yes, well with my concept you'd have your pods plugged directly into the neutron port15:27
mnaserso it is not much different than a vm15:28
johnsomRight. That is the best answer, but not straightforward with k8s15:28
mnaseri feel like there should be a solution to the problem of "a long running task" esp in terms of networking15:29
johnsomFYI, we have had discussions about this at a few PTGs, so there are also notes on our etherpads.15:29
mnaserour amphoras today could technically run into the same issue in other ways15:29
fungiand presumably with network devices like load balancers you need more than one logical port (or some nasty hairpinning/dsr solutions)15:29
mnaserfungi: or the load balancer lives on the same network15:29
mnaserand floating ip attach to get it to an external net15:29
fungi"the same network" makes no sense in a routing context. there are numerous networks15:30
johnsomNot so much, the service VM approach is very stable as we have some level of control. Plus, we can implement HA technologies that allow very fast failover that you can't with "traditional" k8s15:30
zanebjohnsom: such as? I curious what HA stuff is not possible with k8s15:30
zaneb*I'm15:31
johnsomfungi Correct, there are issues with "one-armed" or hairpin from a performance perspective.15:31
johnsomThis is an issue with the "trunking" trick like kuryr15:31
fungioh, or do kubernetes-land load balancers nat the source as well as the destination? i guess that's another way around it15:33
fungistill ugly and prone to overflowing pat tables15:33
johnsomzaneb I didn't say "not possible" (I'm not much of a believer), but requires some non-k8s patterns.  Specifically I am talking about using VRRP with something like keepalived to migrate IP addresses quickly (we are currently at around a second, but moving sub-second in a release or two).15:33
fungidid the vrrp patents eventually expire?15:33
johnsomfungi Yes, there is NAT involved with the current, limited solutions. But right now, most LBs are sitting outside the k8s scheduler.15:34
johnsomfungi That is long since gone and CARP is dead. lol15:34
fungier, right, it was cisco asserting the hsrp patents covered vrrp implementations15:34
*** witek has joined #openstack-tc15:35
zanebjohnsom: are you familiar with MetalLB? (not as a replacement for L7 load balancing, obviously, but for managing failover of traffic into the L7 load balancer it seems like a viable approach)15:35
johnsomOur goal in an HA solution is to be able to failover inside the TCP retry window.15:35
zanebfungi: "kubernetes-land load balancers" consist mostly of 'let the cloud do that for us'15:35
johnsomzaneb I am!  It also lives outside the k8s scheduler15:35
zanebtrue15:36
fungizaneb: sure, i'm wondering more what "the cloud" is doing in those scenarios (especially when it's not openstack)15:36
johnsomIt is pretty good actually15:36
evrardjpyup15:36
johnsomI assume you all have seen the new cloud management service added to k8s?15:36
johnsomBasically, yes, they are acknowledging that there are useful cloud infrastructure services that live outside the k8s scheduler.15:37
fungiyep! the official answer to how to manage your infrastructure for kubernetes seems to be to have it install openstack15:37
zaneb*gasp*15:37
johnsomhttps://kubernetes.io/docs/concepts/overview/components/#cloud-controller-manager15:37
mnaseri'm sure there might be roadblocks or specific services where it might make sense to maybe have 1 or the other15:37
mnaserand i think k8s as a platform will probably slowly develop to maybe adjust to those patterns15:38
mnaseresp with how quickly things are moving15:38
mnaserit's sad to see sometimes but kinda amazing to see the huge amounts of fast progress that goes there15:38
fungii wouldn't count on them moving quickly indefinitely. their activity curve basically looks like ours but shifted a few years15:38
*** jamesmcarthur has quit IRC15:38
johnsomYeah, really, we have been constantly searching for a workable container story. K8s is getting there, but still has some issues that make it less than ideal for an infrastructure service. The health monitoring and pod spin up and down are the major issues.15:39
fungiand lf/cncf have also basically stopped touting yearly activity metrics and switched to cumulative (conveniently that always increases)15:39
johnsomIt does, very similar growing pains as well15:39
zanebfungi: it's a question of what's going to eat their lunch the way they ate our lunch. I don't see anything on the horizon15:39
fungiwell, they didn't eat our lunch. they siphoned off the distracting vendors trying to cram things down our throats so they could advertise their name on something trendy15:40
mnaser^^15:40
mnaserthose two arrows were for zaneb :p15:40
mnaseras someone who's interacted a lot with k8s, my "i need to get an app up" needs are much better served with plenty of tooling around it15:41
fungilast i saw their activity is already heading for a trough, and i don't know what specifically is "eating their lunch" if that's what you like to blame hype trends on... ai/ml? edge?15:41
zanebfungi: they did, and they also siphoned off all of the application developers who were still hoping that we were going to build a thing that they could build applications against15:41
fungiahh, yeah, luckily i don't think that should have been in scope for openstack anyway15:42
zanebwell, you have plenty of company15:42
mnaserwell turns out people want *aaS and openstack promised that and we struggle to deliver15:42
zanebbingo15:43
mnaserand k8s doesnt care about delivering that so that's a really good space for us to leverage their tooling15:43
mnaserso we can deliver *aaS better and get those users those infrastructure blocks they need15:43
johnsomThere have been a lot of people burned by their k8s deployments as well.....15:43
fungii personally think the curve is more of a sociological pattern, where technologies become less interesting as they're entrenched and as they start to work well and people are no longer talking about them as much because they can take them for granted15:43
johnsomIt is hard to shift your brain and re-architect to use that environment properly15:44
zanebfungi: there were 12k people at KubeCon and they were talking plenty15:44
zaneband mostly users, not just people trying to sell stuff to each other15:45
johnsomDoes anyone know if there are OpenStack people starting work on the new cloud provider framework for k8s? (new as of 1.16)15:45
fungithat's awesome. i hope they can turn those users into code contributors15:45
fungi(some significant fraction of them i mean)15:46
*** jamesmcarthur has joined #openstack-tc15:46
fungikubernetes represents a large benefit to our community, so i'd like to see them continue to be able to maintain it15:46
zanebjohnsom: what part is new in 1.16?15:47
johnsomzaneb https://github.com/kubernetes/enhancements/blob/master/keps/sig-cloud-provider/20190422-cloud-controller-manager-migration.md15:48
johnsom"Support a migration process for large scale and highly available Kubernetes clusters using the in-tree cloud providers (via kube-controller-manager and kubelet) to their out-of-tree equivalents (via cloud-controller-manager)."15:49
fungijohnsom: diablo_rojo may know, she has stepped into that group after hogepodge moved on15:49
johnsomCool, thanks.15:50
zanebjohnsom: isn't this it? https://github.com/kubernetes/cloud-provider-openstack15:50
zanebafaik ours was split out a long time back already15:50
johnsomzaneb If I am following the k8s side correctly, that is the "old" code/way15:50
johnsomAh, no, it looks like you are right, this already is using the cloud manager15:51
johnsomThanks, there is my answer15:51
fungiand yes, that's the effort hogepodge pushed in sig-cloud-provider15:53
zanebhttps://github.com/kubernetes/enhancements/issues/669#issuecomment-53727711115:53
fungibasically removing all the in-tree providers and separating them into their own repositories15:53
fungiopenstack's got done quickly, i think it was the first to complete extraction, some of the others took much longer15:55
*** KeithMnemonic has quit IRC15:56
*** lpetrut has quit IRC16:01
*** cgoncalves has joined #openstack-tc16:05
diablo_rojofungi, johnsom catching up...16:06
fungidiablo_rojo: i think we got the answer, it was about the (now relatively old and complete) effort for extracting the cloud providers from the kubernetes source tree16:06
johnsomdiablo_rojo: I think I got my answer already.16:07
diablo_rojoAh yes. I think that is still ongoing. The plan was for most providers to be extracted from tree and have a beta by the end of the next release?16:07
fungibut as for the openstack provider it was done with extraction over a year ago if memory serves16:08
diablo_rojoEhhh it might be extracted, but I dont think it has feature parity with what was in tree.16:09
zanebthe OpenStack one is marked as 'implemented' as of last week https://github.com/kubernetes/enhancements/pull/1492/files#diff-c5fdd15e8c9a42196844891e9726a417R1716:09
diablo_rojozaneb, thank you!16:09
zanebdiablo_rojo: I didn't do it ;)16:10
diablo_rojozaneb, but you found the link which is helpful :)16:10
smcginnisSince there are more tc-members here now, just raising again that this week was reserved for any discussion or needs the TC has before starting the official W naming poll.16:18
smcginnishttps://wiki.openstack.org/wiki/Release_Naming/W_Proposals16:18
ttxsmcginnis: Could you start a ML thread on them?16:18
ttxLots of good names16:19
ttxPotential cultural sensitivity on Wuhan (positive or negative I don't know)16:20
ttxAlso Wodewick if it's indeed perceived as a joke on speech-impaired people16:21
ttxPretty sure Wakanda will be struck out on trademark review, but it's not really our role to anticipate that16:21
ttx(probably same for Wookiee -- the film industry is well known to protect their names fiercely)16:22
ttxI guess Whiskey/Whisky could also be seen as culturally problematic (promoting alcohol consumption)16:27
fungistein and icehouse were technically street names16:29
smcginnisttx: ML post sent.16:29
ttxOK will repeat that early feedback there16:29
gmannfungi: zaneb diablo_rojo johnsom : ++ a few points on the in-tree and separate openstack-provider in k8s. continuous testing is now fixed and enabled, and it is well documented, with this - https://github.com/kubernetes/kubernetes/pull/85637#issuecomment-58430849016:35
gmannit is also marked as beta with this, and people are asked to migrate from the in-tree one - https://github.com/kubernetes/kubernetes/pull/8563716:35
gmannnot sure how easy/tough the migration is.16:36
*** e0ne has quit IRC16:40
*** e0ne has joined #openstack-tc16:41
*** slaweq has joined #openstack-tc16:47
*** e0ne has quit IRC16:51
*** rpittau is now known as rpittau|afk17:00
*** gmann is now known as gmann_afk17:20
*** evrardjp has quit IRC17:34
*** evrardjp has joined #openstack-tc17:34
*** jamesmcarthur has quit IRC17:40
*** e0ne has joined #openstack-tc17:42
*** e0ne has quit IRC17:58
*** gmann_afk is now known as gmann18:49
*** e0ne has joined #openstack-tc19:15
*** e0ne has quit IRC19:24
*** e0ne has joined #openstack-tc21:19
*** jamesmcarthur has joined #openstack-tc22:06
*** ijolliffe has quit IRC22:14
*** slaweq has quit IRC22:25
*** jamesmcarthur has quit IRC22:41
*** witek has quit IRC22:47
*** slaweq has joined #openstack-tc22:49
*** iurygregory has quit IRC22:52
*** slaweq has quit IRC22:55
*** e0ne has quit IRC22:58
*** jamesmcarthur has joined #openstack-tc23:02
*** jamesmcarthur_ has joined #openstack-tc23:05
*** jamesmcarthur has quit IRC23:05
*** slaweq has joined #openstack-tc23:11
*** slaweq has quit IRC23:16
*** jamesmcarthur_ has quit IRC23:24
*** jamesmcarthur has joined #openstack-tc23:25
*** jamesmcarthur has quit IRC23:25
*** jamesmcarthur has joined #openstack-tc23:26
*** jamesmcarthur has quit IRC23:32
