Thursday, 2023-03-23

tobias-urdinyou're both right, i didn't actually think about it from the perspective that tooling is the differentiator but it makes sense, but when everyone is doing such a massive shift at the same time without collaboration just makes me wonder. i guess we, as everything else in tech, are bound to reinvent the wheel a couple of times :)08:36
noonedeadpunkTo be frank I still find weird to go openstack k8s sandwich, I still don't see any decent advantage of deploying openstack on top of k8s09:21
noonedeadpunkyou can't really autoscale, you have ton of exceptions and fighting with windmills (like with computes) and then the only issue you're solving - is already solved decade ago with systemd.09:22
noonedeadpunkyou can say - gitops config, but for that you still need to have a CI/CD system and review process, so you already have all tools you need to do that without k8s...09:24
noonedeadpunkThe only point of that is jsut marketing as you can say fancy words about your deployment tool09:25
noonedeadpunkAnother one, which is probably quite speculative - it might be easier to hire stuff for supporting and developing your tool... but dunno...09:26
bauzasI have an non-corporate opinion about the reasonins09:38
bauzasreasonings*09:39
bauzasthis is called 'lock-in'09:39
bauzasthe more value you propose of your deployment tool, the better you commit your customers to appreciate your work and stay 09:39
bauzasif two products are quite identical by design, then you're less tied to one specific vendor09:40
bauzasbut this has a cost, which is that you don't leverage the opensource factor of an open design and you have to put more resources to a project to achieve your goals in time09:41
bauzasfor that specific reason, I don't think the end sum is positive in terms of revenue over investments09:42
bauzasbut alas, the ship has sailed.09:42
*** JasonF is now known as JayF13:56
mnaseruh14:15
mnasertobias-urdin: so i'd love to comment on that from our side :)14:16
mnaserfor images, this is something i fought inside openstack for ages14:16
mnaserwe should TOTALLY ship standardized container images14:16
mnaserwe should have a Dockerfile in our repos14:16
mnaserthere is SO much knowledge in the community on the best way to build openstack images, and unfortunately, politics came in the way of making progress to make openstack easier.14:16
mnaserALL of these different operators should be using the same image ideally, but the base image argument was the problem14:17
mnaseropendev tooling built on top of python:3 images, those are debian based, support "blablabla" and here we are14:18
mnaseralso from an atmosphere pov, we aren't doing anything else other than use and contribute to existing tooling14:18
mnaserits nothing but a bunch of ansible roles that deploy helm charts by openstack-helm that are all fully integrated14:18
mnaserand as to why k8s+openstack, i can gladly have that conversation for those who are skeptical, but when you see how easy it is to go through major upgrades, i see k8s as an orchestration tool and an implementation detail, operationally its so much easier since you can drive it all with `kubectl` from your cli, immutablity, so much more14:19
mnaseralso it meant for us that we can use things like https://github.com/vexxhost/magnum-cluster-api since we already have a k8s cluster sitting in place, allowing us to leverage tools from the ecosystem14:20
mnasersorry for the wall of text, but i just want to stress that well, this ain't all marketing... hell we probably dont do enough of a good job marketing atmosphere itself, but there are real operational solutions that make lives easier14:21
fungigood point, the gnu/linux distribution religious arguments have not helped matters... "you should provide standard images everyone can use!" ... "no, not like that, i meant standardized on what i wanted to be the standard!"14:38
dansmithfungi: yeah, a bit more inside the container than just our software :)14:39
dansmith(our being openstack)14:39
fungiright14:44
knikolla[m]o/14:45
knikolla[m]gmann: did you already have an etherpad to track topics for the tc+community meeting?14:46
fungidansmith: also redistributing prebuilt container images is a security concern, we've designed our entire stable branch approach around distros packaging the software and backporting security fixes to otherwise frozen versions of our dependencies. as soon as we start publishing images we become the distro and users expect us to start doing that same work (or we say don't use stable branch14:46
fungiimages, only tip of master, and upgrade continuously if you want security fixes in dependencies)14:46
dansmithfungi: yep, hugely14:46
knikolla[m]archpenstack14:48
fungiwe offload a lot of things to distributions today14:48
JayFfungi: we were talking with zigo in Ironic the other day about how debian is going to build a debian-based IPA image using debian methods and debian toolings14:58
JayFIt's pretty cool, but I think you hit the nail on the head that distributions repeat effort because you get to ... dogmatic feelings about how things should be arranged14:59
JayFand that's as much of a strength of the ecosystem as it is a weakness14:59
mnaseri still feel like we could have easily had a Dockerfile that we do not actually publish the artifact of inside repositories15:00
mnaserand it could very well be supporting of both apt and rpm distros15:00
mnaserand then you just change the FROM15:00
knikolla[m]could we still offload the building of images to distributions, but standardize on how those images load the configuration? (aka, deployment tooling to image API)15:01
fungiyes, at least if we tell users to docker build their own image then we aren't distributing copies of other software we don't maintain and taking on responsibility for bugs in it15:01
JayFmnaser: I agree with you in theory; but I am loathe to say we should take on more work with the current state of contributorship.15:01
mnasersome people were ready to drive the work15:01
JayFknikolla[m]: like I mentioned above; some distros even care down to  "how the image is constructed" not just what's inside it15:01
mnaserid gladly do all of it if that means we have to stop maintaining all the dockerfiles and repos we have15:01
JayFknikolla[m]: I find it difficult to believe we'd find a common API that all of (debian, rh, fedora, ubuntu) would all be happy with15:02
JayFthat's basically the linux equivalent of solving world peace lol15:02
knikolla[m]all configs are already in /etc/openstack, no? 15:02
knikolla[m]you provide an entrypoint for bootstrapping command and running15:02
knikolla[m](i know it's overly simplistic)15:03
fungiJayF: well, for any images that they want to ship as part of their distro they care, in the debian case it's that debian promises to its users that everything delivered as part of debian is guaranteed buildable offline from source using only components shipped in debian, so if they ship an image embedded in a package that image has to be buildable by the user from its constituent source15:03
fungipackages in debian with the versions of the necessary tools in debian15:03
JayFI'm saying in these cases, most people I've encountered care a lot about the details :D15:03
JayFfungi: that's exactly the sort of dogma which is awesome but also makes it hard to please everyone. I've heard statements from people who work places that, for instance, require them to base all docker containers on a specific version of RHEL15:04
JayFfungi: so I sorta view this as a mostly-out-of-openstack-scope problem just because it's not solvable unless long-held positions are moved15:04
knikolla[m]but it has become a sort of openstack problem15:05
fungiyes, for similar reasons, red hat won't cover those systems under a support contract if they aren't build from red hat's own packages15:05
fungiit's ultimately users though. some users of debian choose debian because it provides guarantees of being self-contained and rebuildable, some users of red hat choose red hat because it comes with the possibility of support contracts15:07
fungito discount those dogmas is to sideline the users who are looking for them, and the distributions are there to satisfy the expectations of their users15:07
JayFfungi: you basically said what I was trying to say, better15:10
JayFThere are differences at that layer for a reason; and I don't think OpenStack can overcome all those reasons with anything less than a herculean effort15:10
fungii don't necessarily think it has to stop us from providing a "standard" so long as we accept that no matter what solutions we standardize they won't be used by everyone (and that's probably okay?)15:12
knikolla[m]JayF: I have been surprised before15:12
JayFIsn't kolla that standard, then?15:12
fungiit could be, sure15:12
JayFIf someone said "I want OpenStack in containers" I'd point at kolla and k-a and tell them to go have fun :D 15:12
knikolla[m]i'm not very familiar, but from what i've heard from an engineer on our team, those images are too dependent on kolla for orchestrating them15:13
fungiand if they say "no not like that" then you can remind them that all the pieces are there and they can build their own theme park instead15:13
knikolla[m]Not as easy as a docker run15:13
fungiyeah, that comes back to "no standard satisfies everyone"15:13
knikolla[m]But if we use that as a starting point, what would be the next step?15:14
fungialienating all the other deployment projects because we picked a standard that wasn't the standard they wanted ;)15:14
bauzasdon't get me on that xkcd15:15
knikolla[m]we still do treat some drivers as reference implementations for things though, it's not like even in that case we're saying everything is equal15:16
clarkbfungi: I think that is a key point. You accept you won't meet all needs but if you can do ~80% then it ma be worthwhile. I'm a huge fan of how zuul has approached this for example15:16
bauzasthere are no golden drivers, there are tested drivers and untested drivers, that's it15:17
clarkbzuul like openstack is a largely python project. Images are built on the 'python' images15:17
clarkbdone15:17
knikolla[m]I think it was great when we relied on vendors to do the heavy lifting of providing sanity in a world of 100% customization, but we're not in that world anymore. 15:17
JayFI think a huge number of deployers still live in that world15:17
bauzasand we could have a single standard that would work with different distros, provided those 'distro drivers' would be testable15:17
bauzasbut shit, stepped into that xkcd with the left foot15:18
fungithere's a similar debate raging in the python packaging community right now. they ran a survey and the result was users said they wanted packaging to settle on a single set of standardized tool choices. unfortunately, as soon as they start trying to "pick winners" the users throw up their hands and say, no i meant standardize on the tools *i* like15:18
JayFLike it all boils down to: do we have the hands to make this work? Is it more valuable than other stuff those folks could be doing? I resolve both of those questions to a very-likely-no15:18
bauzasfungi: hah, that python packaging tool story is a mess15:18
bauzasI'm glad we stick with our old g'tools15:19
knikolla[m]I think in both cases you have a similar number of upset people15:19
fungi"when i said standardize, i meant replace pip with conda because conda solves all of my problems and also i've never used pip"15:19
knikolla[m]it's just that it's much easier to stay silent because of the status quo15:19
knikolla[m]if you've already put up with something for years, meh15:20
bauzasJayF: well, if we number the headcounts of every single existing tool, that doesn't seem a problem15:20
JayFbauzas: if you have a crowbar that can successfully leverage those headcounts into upstream work, can you tell me where you bought it so I can get two or seventeen? :D 15:20
fungithere's a bit of a zero-sum-game fallacy at work in the push for standardizing on fewer tools. in the conda example, users think that getting rid of pip would solve all their problems because then people would be forced to package everything for conda15:21
bauzasI wish I had15:21
JayFI struggle a lot with these questions; both the python packaging problem and the openstack-deployment-tooling question ... basically I know that I dislike most things that have been proposed, but I don't know what a better solution is15:21
JayFand in that situation, I place a strong bias on maintaining status quo until a better solution comes about15:22
fungiwe can standardize officially by blessing a specific way of distributing openstack, but it's not going to stop people from continuing to spend their time building alternatives if that's what they want to do, and it's definitely not going to force them to spend that time improving the standardized solution15:22
knikolla[m]there are places like https://scs.community/about/ that are trying to standardize on an open source reference implementation of openstack/k8s/and all the glue you'd ever need15:24
knikolla[m]perhaps instead of us being the blesser of a standard, we can do a better job of advertising these starter packs15:24
knikolla[m]reference deployment architecture*15:25
knikolla[m]"The standards are a collection of upstream standards enhanced with specific choices that are not specified upstream."15:27
fungiyes, i attend most of their weekly scs-tech meetings (except this week it conflicted with openinfra.live because it fell between the usa and eu time changes). they're quite active and open to participation from anyone who wants to help too15:33
fungiand they're very accommodating of my terrible (or near complete lack of) proficiency with german language too15:34
knikolla[m]I co-managed the Sovereign Cloud devroom during FOSDEM with them, last month. Very interesting work. 15:34
knikolla[m]Talks are recorded if someone is interested. 15:34
fungithey also did some data sovereignty, open operations and related talks at the last berlin summit15:35
fungirecordings should be available on the openinfra youtube channel15:36
clarkbone good reason to host things like a Dockerfile within the repos themselves is that as an end user I don't need to spend a lot of extra time vetting the random docker files someone else may have written15:37
clarkbI don't think there is anyhting wrong with third parties doing this work. But it does create user overhead15:37
clarkbso ya it won't solve everyone's problem but for the people without strong opinions it can create a streamlined onboarding process15:38
clarkbeveryone else probably knows enough to figure out theirvetting requirements15:38
knikolla[m]clarkb: i'm having flashbacks. I think I spent the better part of a week building a horizon container with some very specific customizations that required external plugins). not fun. 15:40
knikolla[m]yeah, i don't want to prevent third parties doing this work in their chosen way (ecosystem), but I really want to enable people to get started quickly 15:42
fungiwhich arguably is what the loci repo provides, as long as you agree with all of the choices made in it15:57
clarkbfungi: not really because to an outsider looking in that is just another third party they have to vet15:58
fungiwell, yeah, but those dockerfiles could in theory be moved into the individual service/tool repos15:59
clarkbIf someone knows they are going to run a service like nova they know they need to give nova root (at least with libvirt that is roughly equivalent) so they trust nova. That means they will trust a nova hosted dockerfile15:59
knikolla[m]clarbk: what other projects require root besides nova? 16:00
clarkbknikolla[m]: I think neutron. Maybe cinder depending on its lvm/iscsi work16:01
bauzasask privsep16:01
bauzashttps://codesearch.openstack.org/?q=import%20privsep&i=nope&literal=nope&files=&excludeFiles=&repos=16:02
clarkband there are other types of trust as well. More just trying to give an example of how you can't avoid that trust relationship but you can with other third parties16:03
knikolla[m]nova, cinder, neutron, as per codesearch. thanks. 16:03
clarkbwhich to me leads to value in hosting tools like this within the codebase even if it doesn't solve problems universally16:03
clarkbfungi: I think loci still tries to boil the ocean and support all the things?16:04
clarkbfungi: that may be a good starting point if it can be trimmed down though16:04
knikolla[m]perhaps the dockerfile in the individual repos can build on the loci image to provide defaults? 16:04
knikolla[m]this way you solve the trust and reuse existing tools16:05
bauzashttps://codesearch.openstack.org/?q=from%20oslo_privsep&i=nope&literal=nope&files=&excludeFiles=&repos= actually is better :)16:05
knikolla[m]bauzas: ah, that is a lot more. 16:05
gmannknikolla[m]: no, have not created leader interaction etherpad17:06
knikolla[m]gmann: ok, we can use the same TC PTG etherpad. 19:07
gmannknikolla[m]: we used separate as it can fill up with more text in main etherpad but up to you if you want to use same etherpad19:09
gmanni think both way can work19:09
knikolla[m]as the community meeting falls under the same os-tc track, and we can only link one etherpad using the ptgbot, i'm leaning more towards a unified etherpad19:10
knikolla[m]i'm also leaning towards picking the 2-3 items from the tc planning etherpad that would most concern the community for the monday meeting19:11
fungian option is to have a breakout pad and link it in the main pad19:11
fungisome teams do that for every discussion topic (master brainstorming pad with topic-specific pads hyperlinked from the main pad)19:12
fungiliteral urls in an etherpad automatically become clickable links19:12
gmannknikolla[m]: what all topics? because TC+leader interaction discussion goal is "to interact with them and listen to their issues", any interested topics for wider community can be discussed in normal Thursday/Friday slots which are open for communty too19:14
gmannI am afraid adding specific topic to interaction sessions make it more a normal TC slot then interaction sessions19:14
gmannI mean specific topic from current proposals of TC discussion19:14
gmannfungi: yeah, that is what I did in past PTGs/ linking interaction etherpad from main etherpad19:15
knikolla[m]gmann: that is good feedback. i was just going through the etherpad for last ptgs meeting 19:16
knikolla[m]and see that is much more unstructured that i remembered19:16
gmannknikolla[m]: this is last PTG interaction etherpad https://etherpad.opendev.org/p/tc-leaders-interaction-2023-119:16
knikolla[m]gmann: thanks, was just going through it. 19:17
gmann+119:17
knikolla[m]tc-members: last chance to add items to the PTG, please take a look at the etherpad and act now if you see something missing https://etherpad.opendev.org/p/tc-2023-2-ptg19:31
gmannI have pinged k8s steering committee members in case they want to join us in this PTG too. hoping its is not late notice to them. but if they respond I will add the preferred time in etehrpad19:37
knikolla[m]#thanks gmann for single-handedly having already thought out the PTG agenda items and continuing to do an amazing job. 19:51
opendevstatusknikolla[m]: Added your thanks to Thanks page (https://wiki.openstack.org/wiki/Thanks)19:51
gmann:)20:29
*** promethe- is now known as prometheanfire23:08

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!