Friday, 2021-08-27

opendevreviewEduardo Santos proposed openstack/diskimage-builder master: Bump Ubuntu release to focal  https://review.opendev.org/c/openstack/diskimage-builder/+/806296  02:28
*** ysandeep|away is now known as ysandeep05:36
opendevreviewEduardo Santos proposed openstack/diskimage-builder master: General improvements to the ubuntu-minimal docs  https://review.opendev.org/c/openstack/diskimage-builder/+/806308  05:41
opendevreviewIan Wienand proposed openstack/diskimage-builder master: yum-minimal: use DNF tools on host  https://review.opendev.org/c/openstack/diskimage-builder/+/806318  06:38
*** rpittau|afk is now known as rpittau07:07
opendevreviewRiccardo Pittau proposed openstack/diskimage-builder master: Fix debian-minimal security repos  https://review.opendev.org/c/openstack/diskimage-builder/+/806188  07:22
opendevreviewIan Wienand proposed openstack/diskimage-builder master: yum-minimal: use DNF tools on host  https://review.opendev.org/c/openstack/diskimage-builder/+/806318  07:35
*** jpena|off is now known as jpena07:38
*** mgoddard- is now known as mgoddard07:58
opendevreviewIan Wienand proposed openstack/diskimage-builder master: yum-minimal: use DNF tools on host  https://review.opendev.org/c/openstack/diskimage-builder/+/806318  08:23
*** ykarel is now known as ykarel|lunch08:24
opendevreviewIan Wienand proposed openstack/diskimage-builder master: yum-minimal: use DNF tools on host  https://review.opendev.org/c/openstack/diskimage-builder/+/806318  09:07
*** ysandeep is now known as ysandeep|lunch09:12
*** ysandeep|lunch is now known as ysandeep09:36
opendevreviewIan Wienand proposed openstack/diskimage-builder master: yum-minimal: use DNF tools on host  https://review.opendev.org/c/openstack/diskimage-builder/+/806318  09:53
*** ykarel|lunch is now known as ykarel10:04
*** jpena is now known as jpena|lunch11:34
opendevreviewJean Paul Gatt proposed openstack/diskimage-builder master: Support setting custom repository for Redhat Satellite.  https://review.opendev.org/c/openstack/diskimage-builder/+/806395  11:37
lucasagomeshi folks, if you have some time today mind taking a look at this new project proposal https://review.opendev.org/c/openstack/project-config/+/805802 ? It needs another +2. Thanks in advance!12:17
*** dviroel|out is now known as dviroel|ruck12:17
fungiconfig-core: ^12:22
fungii +2'd it yesterday, but we've been spread a bit thin on reviewers this week12:23
lucasagomesfungi, it's all good! thanks much for reviewing it12:28
*** jpena|lunch is now known as jpena12:31
*** rpittau is now known as rpittau|afk13:42
opendevreviewMerged openstack/project-config master: New project: OVN BGP Agent  https://review.opendev.org/c/openstack/project-config/+/805802  13:54
opendevreviewMonty Taylor proposed openstack/diskimage-builder master: yum-minimal: use DNF tools on host  https://review.opendev.org/c/openstack/diskimage-builder/+/806318  14:02
*** ysandeep is now known as ysandeep|away14:41
*** ykarel is now known as ykarel|away14:43
*** dviroel|ruck is now known as dviroel|ruck|lunch14:57
clarkblooks like it got approved15:03
opendevreviewJames E. Blair proposed opendev/system-config master: Pin base and builder images to buster  https://review.opendev.org/c/opendev/system-config/+/806423  15:03
clarkbLooking at my day I expect to be around a lot more today, but probably not with consistent blocks of time to do things like the job mapping or gerrit account cleanup.15:03
clarkbWill probably try to focus on reviews and helping with bullseye stuff15:04
corvusmordred, fungi: take a look at 806423 -- the commit msg there outlines a plan if we want to do this stepwise15:04
corvusi think we should either do the plan in the commit message, or manually push the tag and try to push things through quickly15:05
corvusit's just that i'm not sure how much time folks have to devote to this.  if we can't wrap it up more or less immediately, i think 806423 is a good idea because it gets us back to a stable/maintainable state quickly and we can do the transition slowly15:06
clarkbcorvus: the plan outlined there makes sense to me. Then we can shift to bullseye on a case by case basis which limits blast radius as we go15:06
mordredcorvus: reading15:07
clarkband buster isn't eol for a while so no concern there15:07
mordredcorvus: ++15:09
fungii'm good with the plan15:09
fungiand sorry i'm intermittently responsive, hurricane shutter people still trying to finish up and there's a crew here foam-insulating the attic all day too15:10
lucasagomesinfra-root hi, the patch https://review.opendev.org/c/openstack/project-config/+/805802 is now merged (thanks!) and the gerrit groups have been created (https://review.opendev.org/admin/groups/e7b50bd1c1fcb7b67d184c0df2de5bdee06b7b03 and https://review.opendev.org/admin/groups/b47387254ae1ed75db7479f1725759e43500dced) can you please add me to these groups?15:10
lucasagomesso I can start adding the rest of the members, thanks!15:10
fungilucasagomes: yep, just a moment and i'll take care of it15:10
lucasagomesfungi++ thanks much!15:10
mordredcorvus: +A15:10
fungilucasagomes: done, let me know if it's not working like you expect15:13
lucasagomesfungi, thanks much! Will do15:13
clarkbfungi: I have approved https://review.opendev.org/c/opendev/system-config/+/805407 to reflect the upgrade you did. thanks!15:28
fungioh, thanks! i should have done that, sorry i forgot15:29
fungialso i guess it's time to think about scheduling a similar lists.o.o upgrade15:30
clarkbyup, do we want to discuss that in next week's meeting?15:31
clarkbthe openstack release makes that maybe a little tricky, but overall things went well so really would only expect issues in our vhost setup?15:31
fungithat's my thinking too15:31
fungii'd be happy to do the upgrade on a saturday or something to minimize impact of the (longer in this case mainly because snapshotting will take longer) outage15:32
fungior we could try snapshotting it live instead and just live with the possibility of a janky filesystem if we need to boot from the image15:33
clarkbdo you recall how long the snapshot took last time?15:33
funginot really, a few hours i think15:33
clarkbprobably the thing to do is propose a date and then ask openstack et al to weigh in if they have concerns with the timing. Then OpenStack can indicate the conflicts for their release if problematic15:34
fungii'll see if i can find a more precise duration for the last lists.o.o image i made15:36
clarkbfungi: I'm happy to help on a weekend too though definitely not this one. This week was exhausting :)15:39
*** jpena is now known as jpena|off15:39
fungiyeah, of course not this weekend ;)15:40
fungii don't have plans next weekend, though that's a holiday weekend in the usa i think? so you're likely to have plans15:41
clarkbI don't have concrete plans but ya it is the weekend before school starts here15:42
fungilooks like i made the last lists.o.o image on 2021-05-0415:43
fungi"5:15 PM WET" according to rackspace... what tz is that?15:44
clarkbwestern european time zone says google15:45
clarkbweird that rax would report timestamps in that timezone unless this was a hack to get utc because they don't use utc directly?15:45
clarkbit is utc+015:45
fungiaha15:45
fungiyeah, except it may also have a daylight time15:45
fungianyway, that's coarse enough for me to hopefully find it in the irc log15:45
fungiahh, right, this was in prep for esm enrollment with the exim vulnerability around that time15:48
fungiCVE-2020-2802015:50
lucasagomesfungi, FYI everything seems to be working great! Thanks much15:50
fungiyou're welcome, glad to hear it15:51
fungiclarkb: looks like i started creating the image at 17:15 utc and remarked at 21:52 utc that it had finally completed, though not sure i caught it right when it completed. i did remark at 20:00 utc that it was still being created, so that means somewhere between 2.75 and 4.5 hours15:53
fungikeeping in mind that was imaging it while the server was running, so if we do take it offline to create the image instead it might be quicker (no guarantees though)15:55
opendevreviewMerged opendev/system-config master: Test lists.kc.io on focal  https://review.opendev.org/c/opendev/system-config/+/805407  16:00
opendevreviewsean mooney proposed openstack/project-config master: Add review priority label to nova deliverables  https://review.opendev.org/c/openstack/project-config/+/787523  16:02
clarkbfungi: ya we can't be sure of what the limitation is there (disk iops or catching up with running changes, etc)16:05
*** dviroel|ruck|lunch is now known as dviroel|ruck16:06
clarkbif we figure 4.5 hours to snapshot then allocate 2 hours for the upgrade itself that is still doable within a day.16:07
fungiyep16:14
fungirelated, the openinfra foundation staff are interested in moving a bunch of general foundation mailing lists from the lists.openstack.org site to a new lists.openinfra.dev site, which i'm pushing out until post-upgrade because i want to make sure i can get an accurate picture of what the memory pressure increase from that will likely be16:19
fungishould be able to check the output of free, stop all mailman services, run free again, take the difference and divide by 5 to get our average per-site memory consumption, then project what it would look like to increase overall memory utilization on the server by a similar amount16:21
fungiit's not precise, but should be good enough to gauge the potential impact16:22
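A minimal sketch of the measurement fungi outlines above, assuming a single service unit manages all five mailman sites (the unit name and sleep interval here are illustrative, not the actual server configuration):

    #!/bin/bash
    # Sample used memory with all five mailman sites running, stop them,
    # sample again, and divide the difference by the site count (MiB).
    before=$(free -m | awk '/^Mem:/ {print $3}')
    sudo systemctl stop mailman   # assumption: one unit runs all sites
    sleep 10                      # give the kernel time to reclaim pages
    after=$(free -m | awk '/^Mem:/ {print $3}')
    echo "approx per-site usage: $(( (before - after) / 5 )) MiB"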
clarkbI think it may be apache that consumes a good chunk of the memory too which we can potentially tune down16:23
opendevreviewMerged openstack/project-config master: Add review priority label to nova deliverables  https://review.opendev.org/c/openstack/project-config/+/787523  16:23
fungithough i guess i can also try to infer the impact of the upgrade by performing a similar exercise pre-upgrade (now) and comparing to the same on lists.k.i since it's already upgraded16:23
fungitaking the 5x multiplier into account16:24
fungii'll try to find a quiet time this weekend to do all that quickly16:24
clarkbmight be a good idea to do that anyway16:24
clarkbsince it will indicate if 5x on focal is likely to cause problems16:24
fungiyep16:24
fungimy concern as well16:25
fungiit's mostly python-based daemons, which means individual python processes, and those aren't exactly light on memory use16:25
corvusfungi: any chance you can take the opportunity to move those lists to mm3?16:25
fungicorvus: i see the operating system upgrade as making that easier, but would probably want to do that as a separate step16:26
clarkbthe plan we had been operating under was convert to ansible (done), upgrade to focal (in progress), then figure out mm3 on the modern tooling and os16:26
fungicorvus: but yes, ultimately i do want to do that16:26
corvusAck. Just wondered if we could skip a step, sounds like no because tooling isn't ready16:27
fungihowever, i'm not sure if we have sufficient resources on the existing server to run it all side-by-side16:27
fungithat's another thing we need to dig into16:27
fungii think the running mm3 part shouldn't be hard. i already did a poc a few years back with distro packages though we could switch to using the semi-official container images16:28
clarkbfwiw using our system-config-run- jobs we can test the deployment of mm3 and possibly even an upgrade from mm2 to mm3 on focal like what I've done with gerrit 3.2 -> 3.316:28
clarkbthat means we can make progress on the operating system upgrade and mm3 concurrently if we have time16:29
fungiin theory once everything is moved to mm3 the memory footprint could be smaller, as we get full multi-domain support in doing so and can drop the current vhost setup16:29
fungimm3 correctly maps name@domain to a unique list, so no need to worry about collisions over the name portion (which is the primary concern with mm2's domain functionality)16:30
clarkbthe buster image switch should merge momentarily. Then we cross fingers for happy promotion17:05
mordredWoot17:06
opendevreviewMerged opendev/system-config master: Pin base and builder images to buster  https://review.opendev.org/c/opendev/system-config/+/806423  17:11
clarkbbase 3.8 and uwsgi base 3.9 failed to promote17:13
clarkbnow where that gets interesting is if we've got a bullseye 3.8 builder and a buster 3.8 base17:13
clarkbI don't think that is an issue in this case17:13
clarkbhttps://zuul.opendev.org/t/openstack/build/48199b22f7d74af292fe77a93ae17787/console says we failed after the promotion if I read that correctly. Someone other than myself should probably double check and confirm that17:14
clarkbthe uwsgi builds seem to be in a similar situation17:17
mordredyeah - that happened yesterday - there's issues in the cleanup17:54
fungiso what's the next step? we can just merge the fix and then follow up with the unpin?18:02
Clark[m]I may miss some context but wasn't that a fix?18:04
Clark[m]Since it sets us back to where we were?18:04
fungithe pin? i thought that was so we could merge dib fixes18:09
fungithe yum->dnf switch18:09
fungior am i crossing the streams?18:09
mordredyeah - the buster pin should open the door for us to fix dib and nodepool18:09
mordredwithout things being blocked while we work it18:10
fungiright, cool18:10
fungiClark[m]: so you're not necessarily missing context, it was a fix for the fix18:10
fungiso that we can fix it fixed and then unfix once it no longer needs fixing18:10
fungibecause that's in no way confusing18:11
PalaverI need to push an OS image to zuul in order to run tests for a kolla-ansible change18:51
mazzyWho can help with that?18:53
fungimazzy who can help with what? or are you also Palaver?18:53
mazzyI already spoke with a core maintainer of the kolla-ansible project and he redirected me here18:53
mazzyYes fungi, it's me. Wrong nickname 18:53
mordredfungi, Clark, corvus: I'm working on a followup patch for the buster/bullseye stuff18:55
fungimazzy: our nodepool-builder servers build operating system images using diskimage-builder, what specifically are you looking for? we may already have a representative image of the linux distribution and version on which you want to test18:55
mazzyfungi: thanks. The image I would need is of Flatcar 18:56
*** odyssey4me is now known as Guest558518:56
mazzyI'm not sure you have it already 18:56
mordredthis: https://www.kinvolk.io/flatcar-container-linux/ right?18:57
mazzymordred: correct 18:57
fungimazzy: so the first step would be making sure https://pypi.org/project/diskimage-builder has elements for building images of that18:57
mazzyI would address the stable version 18:58
mazzyFlatcar distributes several images for several platforms 18:58
fungionce it's supported by diskimage-builder, we'd add configuration for it to our nodepool-builder servers so they would start building images of it18:58
mordredgotcha. that might not be super immediately compatible with how we run vms - a decent amount of work would need to go into the zuul jobs - the base jobs are going to make a lot of assumptions about being able to ansible in to a node and do stuff. so some design work would need to be done to figure out the best way to accomplish using it18:58
mazzyI've already tested kolla against flatcar 18:59
mazzyAnd it works 18:59
mazzyI was able to spin up a fleet just by changing a few lines of kolla-ansible 19:00
fungiwe'd also want to get package mirrors and python wheel builders set up to reduce network overhead from installing things on the nodes19:00
mordredthat's not the issue - it's the mechanism of interaction. I'm sure it's solvable, but it'll take more design than just being able to boot one19:00
mordredfungi: that's the thing - you dont install things on those nodes19:00
mordredthis is coreos - just different19:00
fungium, you don't install things?19:00
fungihow do you install kolla on it?19:00
mordredit's immutable base os designed for running containers19:00
mordredso you can run containers19:00
mazzyExactly 19:00
mazzyI have just installed Python 19:00
mordredbut ansibling in and running pip install is not going to work19:01
mazzyAnd that's it 19:01
fungiahh, we don't test on containers, we test on virtual machines, so presumably kolla would have jobs to build flatcar linux containers on some other distro in that case19:01
mazzyPython is still possible to be installed along with pip 19:01
fungiwhen we test container things, we install the containers onto virtual machines managed by nodepool19:01
mordredmazzy: I assume one bootstraps via cloud-init like with coreos?19:02
mazzyIgnition 19:02
fungiso zuul isn't communicating directly with containers to run the jobs, it's communicating with the virtual machines where those containers are installed19:02
mordredfungi: yah - flatcar is the os for the VM19:02
fungier, how can it be immutable then?19:03
mordredbut it has a vastly different operating model than what we expose19:03
mordredthe base filesystem is an immutable snapshot. when you boot it, you provide cloud-config info that tells it what containers you want it to run19:03
mazzyBecause everything runs in containers and it does not have any package manager 19:03
fungiyou still need to be able to write somewhere on the node19:03
mordredyah - there's usually data volumes19:03
mazzyCorrect 19:04
mordredbut they're accessed/exposed as container volumes19:04
fungimazzy: well, anyway, i guess this discussion highlights that flatcar isn't designed like a typical linux distribution, so you'll want to get very familiar with how zuul and nodepool work before trying to design a way to integrate them19:04
mordredthe model of "shell in to the node with ssh and perform os commands to do stuff" is not the model19:04
mordredyeah - I do think it's possible19:04
mordredbut it's going to be very non-trivial19:04
mazzyWait a sec 19:05
mazzyWhat I'm trying to do is not deliver support of the base os for the containers. 19:05
mazzyI'm trying to deliver support for the os where containers will run 19:05
mordredright19:05
mordredthat's what's going to be very non-trivial to support19:06
mordredmazzy: for context, we don't use cloud metadata services for instance specialization *AT ALL* 19:06
mazzySorry do not follow you 19:06
fungithe way our ci jobs normally work is that nodepool builds an image which contains cached copies of a lot of stuff, boots virtual machines from those images, allocates them to builds for jobs when requested by zuul, then the zuul executor connects to the node(s) for the build via ssh to run the playbooks for those jobs, which generally involves installing the things the job will need into the19:06
funginode and then running some testsuite and collecting results/artifacts and reporting results, at which time the nodes are returned and garbage-collected to free cloud quota19:06
mazzyWhat does that mean?19:06
mordredso we currently don't even have a mechanism to pass any info to ignition19:06
mordredyah. what fungi said. we don't expose any interface to the cloud mechanisms that you would need to interact with to be able to boot and interact with a flatcar os image19:07
fungiwe expect the cloud provider to pass information to the virtual machine via configdrive, so it knows how to configure networking so that zuul will be able to connect to it19:08
fungiwe use a lightweight agent https://pypi.org/project/glean which is like a very stripped-down cloud-init replacement19:08
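For illustration, the standard OpenStack configdrive layout that glean consumes can be inspected like this (these are the conventional paths, not anything opendev-specific):

    # Mount the configdrive the cloud attaches to the instance and look
    # at the metadata glean reads at boot to set up networking and keys.
    sudo mkdir -p /mnt/config
    sudo mount -o ro /dev/disk/by-label/config-2 /mnt/config
    cat /mnt/config/openstack/latest/meta_data.json      # instance metadata, ssh keys
    cat /mnt/config/openstack/latest/network_data.json   # network configuration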
mordrednow - I'm 100% certain a solution could be designed. but it'll be some deep design and interaction19:08
mordredfungi: yah - so in flatcar they use Ignition instead of cloud-init - which is like a more powerful beefed _up_ cloud-init replacement ;) 19:09
mazzyOk but in my case I could bundle a flatcar image with just Python and pip installed. This is the only thing I need 19:09
mordredyou need to be able to ssh into the node19:09
mordredand you need to be able to interact with the node to do your actions via that ssh connection19:10
fungialso if we don't rebuild the image with cached copies of the openstack projects, the builds will end up fetching tons of git state on every run19:10
fungiwe generally rebuild all our operating system images daily19:10
mordredwell - I imagine the kolla container builds would happen on a different node and these would use the built container images19:11
fungito make sure their contents and cached data are as current as is manageable19:11
mordredbut we would want a variation of the base jobs that did not attempt to push the git repo contents to the flatcar node19:11
fungiif the idea is to test changes to kolla, then the kolla images would need to be built somewhere by the job, so yeah it could be a multi-node job which uses an ubuntu node to build the images and then a flatcar node gets them deployed from an instantiated registry, or there could be a build job and a flatcar test job depending on that sharing images from the buildset registry19:12
mazzyHow do you usually ssh into the image?19:12
*** sshnaidm is now known as sshnaidm|afk19:12
mazzyUser/pwd?19:12
fungimazzy: rsa key19:13
mordredthere's a public key19:13
fungisupplied via cloud provider metadata and installed at boot by glean19:13
mordredfungi: it's like our opendev servers where we are just running docker-compose. except instead of building from a base os, installing docker with ansible and then putting the compose file on the node, docker would be pre-baked in and we'd pass the compose file in at instance boot time via instance metadata19:13
mazzyIirc flatcar should still support cloudinit 19:14
fungior are we installing them into the built vm images with a nodepool element? i need to double-check19:14
mordredfungi: I think we moved to keys via instance metadata19:14
mordredbut you might be right :) 19:14
fungiwell, we do set a ZUUL_USER_SSH_PUBLIC_KEY in the env-vars for nodepool builders19:15
fungiso i think it's baked into the images19:15
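A rough sketch of what an image-build step consuming that variable might do; this is an assumption about the mechanism for illustration, not the actual opendev element:

    # Bake the zuul public key into the image at build time so the
    # executor can ssh in without relying on cloud metadata.
    install -d -m 0700 /home/zuul/.ssh
    echo "${ZUUL_USER_SSH_PUBLIC_KEY}" >> /home/zuul/.ssh/authorized_keys
    chmod 0600 /home/zuul/.ssh/authorized_keys
    chown -R zuul:zuul /home/zuul/.ssh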
mazzyhttps://github.com/kinvolk/coreos-cloudinit19:15
mazzyYeah seems cloudinit still supported 19:15
fungimazzy: while you were gone i think i convinced myself we just bake the zuul ssh public key into our node images anyway19:16
Clark[m]We don't use glean or cloud-init for the zuul credentials. We do use glean for our root credentials19:16
Clark[m]A better approach might be to reboot into flatcar19:16
Clark[m]That way zuul can bootstrap per usual then the job can convert itself to the target19:16
Clark[m]But I'm not sure how feasible that flip would be19:17
fungioh, could even do something like kexec maybe to save time19:17
mazzyWhich type the images you use are?19:17
mazzyAre qemu images?19:18
Clark[m]It depends on the cloud. We build a single image with diskimage-builder then convert it to raw, qcow2, and vhd for various clouds19:20
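The conversions Clark mentions map onto standard qemu-img invocations (a sketch; the file names are illustrative, and diskimage-builder can also emit multiple formats directly via its -t option):

    # Convert one built qcow2 image into the formats various clouds expect.
    qemu-img convert -f qcow2 -O raw node-image.qcow2 node-image.raw
    qemu-img convert -f qcow2 -O vpc node-image.qcow2 node-image.vhd   # qemu calls vhd "vpc"
    # or at build time: disk-image-create -t qcow2,raw,vhd ...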
mazzyInteresting because they already do that19:20
mazzyKinvolk builds images for anything out there 19:20
mazzyhttps://stable.release.flatcar-linux.net/amd64-usr/current/19:21
Clark[m]Right but we want to build our own images. It allows us to control what goes in them and ensure they are all identical other than format across our clouds19:21
mordredyeah. booting the image is not the problem19:21
Clark[m]And unfortunately we tend to have clouds that can't boot random internet images19:21
mordredthe problem will be interacting with the image once it is booted19:21
Clark[m]Well it is part of the problem due to networking setup19:21
Clark[m]But then once that is solved you have the next issue of talking to the image19:22
mazzyWell, but the point is exactly that. If we bake the image then we can push what we want in the image 19:22
mordredmazzy: we can - but we still can't have a _job_ pass anything to ignition19:23
Clark[m]It might be helpful to talk about what you are trying to achieve with flatcar and our CI system19:23
mazzyI mean at the moment with the flatcar builder they provide I can bake everything inside of the official image 19:23
mazzyClark[m]: adding Flatcar support to kolla 19:24
mordredright - but to make use of that you'd need the ability to have zuul job build a flatcar image containing the kolla images you need inside of it ... and then the ability to boot that flatcar image in the cloud. that's super unpossible. the next option would be to have one job build kolla images and then a second job boot a flatcar instance that you would run kolla containers inside of - but unless you can start those containers via an ansible19:25
mordredssh connection, which is not how people use flatcar, it's still going to be an issue19:25
fungiyeah, maybe part of the confusion here is that we don't boot nodes in a job, we boot nodes and then jobs ask for an available already booted node to run on19:25
mordredright19:25
fungiso the nodes are not job-specific19:25
fungithey are generic nodes booted with a representative of whatever general-purpose operating system they are meant to replicate19:26
mazzyBut we can leverage flatcar official tools to build images for zuul jobs  19:27
mazzyThose are open source 19:27
fungiyes, there could for example be diskimage-builder elements for building generic flatcar images19:27
fungiand then we could boot generic flatcar virtual machines from them which jobs could request19:27
mazzyExactly 19:28
fungiand then *magic happens* to add the things which the job will need in the running flatcar virtual machine19:28
mordredyup. that part is easy enough (it's work, but it's easy)19:28
mordredyah. that's the part that needs a story19:28
mordredmazzy: so - once we have booted a flatcar instance for your job, how do you expect to run kolla containers in it?19:28
mazzymordred: what do you mean?19:29
fungihow do you install kolla once flatcar is booted19:29
mazzyRunning ansible playbooks 19:29
fungigot it19:29
mazzyThis is what I've already done and fully tested in my servers today 19:30
mordredok. so you can ssh from outside the flatcar instance and do all the things via ansible?19:30
mazzyExactly 19:30
fungiso zuul will ssh into a running flatcar virtual machine, then run ansible playbooks to install kolla19:30
mazzyCorrect 19:30
mordredok - sweet. that's not nearly as much of a mismatch as we were worried about19:30
mordredin that case, then, adding dib support for building a flatcar image is going to be the main work19:31
fungiso in that sense it should work like a generic gnu/linux distribution on a virtual machine19:31
mazzyExactly. 19:31
mazzyTo be used, the flatcar image must come with Python installed 19:31
mazzyWhich of course is not the case 19:31
mazzyAnd we need to bake it with Python 19:32
fungiyeah, we install python into all our virtual machine images when building them19:32
mazzyOr ask to ansible to install it 19:32
mordredheh. ... https://kinvolk.io/docs/flatcar-container-linux/latest/reference/developer-guides/sdk-modifying-flatcar/#using-cork <--19:32
mordredthat works with chroots already19:32
mordredso adding dib support for flatcar stuff might not be horribly difficult19:32
mazzyCork -yeah their tool 19:32
mazzyCork and mantle to be exact 19:33
mazzyWhat is your opinion on Python? When should it be installed? 19:33
fungimaking sure glean works instead of cloud-init may be useful, for consistency with our other images19:33
mordredyeah19:34
mordredmazzy: it'll need to be in the image - first thing we do with one is ansible in - so without python that will not work well :) 19:34
fungimazzy: we install a "default" python but allow jobs to install other versions of python after starting if they need different interpreter versions19:34
Clark[m]fungi: not just consistency but cloud-init may not work with certain clouds19:34
mazzyMake sense 19:35
mazzySo we need to make it with Python 19:35
mordred"Flatcar Container Linux is based on ChromiumOS, which is based on Gentoo." <-- there's already gentoo support in dib - so ultimately it's possible it might be fairly straightforward to add support19:35
fungibut yeah, we bake some python version into all our virtual machine images just to make things go more smoothly with ansible19:36
mazzyOk cool cool. Seems we are on the same page. Then what's next? 19:37
mazzyShould I create a proposal anywhere? 19:37
fungi(generally whatever the default python3 interpreter is for the distro, but if the distro doesn't have a default i guess we pick one)19:37
mordredmazzy: look at https://opendev.org/openstack/diskimage-builder 19:37
mordredthat's what nodepool uses to build images for zuul19:37
mordredit'll need flatcar support19:37
fungiyeah, that's essentially our entry point for creating images19:37
mazzyYes. So I can wire changes in there. Cool cool 19:38
funginodepool is going to call diskimage-builder specifying some set of elements, for example debian-minimal19:38
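So a build might eventually look like the following; note the flatcar element named here does not exist yet and is purely hypothetical:

    # Hypothetical disk-image-create invocation once a flatcar element
    # lands; "flatcar-minimal" is an illustrative name. simple-init
    # installs glean, and vm adds a cloud-bootable partition/bootloader.
    DIB_RELEASE=stable disk-image-create -o flatcar-stable \
        flatcar-minimal simple-init vm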
mazzyIn our case I would like to start simple. Address only the stable version 19:38
mazzyWhich Python do you usually use? 19:39
mazzyI used to run PyPy on flatcar because it's easy to bundle and install 19:39
fungiwe also have functional test jobs which run on proposed diskimage-builder changes to make sure the built image can be booted under devstack and reached by network, so should be a fairly good indicator to reviewers whether it's working19:40
fungimazzy: cpython (generally a supported 3.x)19:40
mazzyPerfect. Oh Wait sec 19:40
mazzyImportant point... Networking 19:40
fungiwould probably make sense to use cpython 3.9.whatevers-latest19:40
mazzyThere is dhcp around?19:41
mordredon some clouds19:41
fungiit depends on the cloud19:41
mordredhttps://opendev.org/openstack/project-config/src/branch/master/nodepool/nb03.opendev.org.yaml#L86-L96 <-- this is the list of elements we include in our dib builds19:41
fungithe configdrive metadata will provide overrides if defaulting to dhcp isn't an option19:41
mordredsimple-init is one of the ones you'll really need to focus on - it's what installs glean which is what deals with networking19:41
mordredit already supports gentoo - so it's likely not that bad to handle19:42
mazzyGotcha 19:42
mazzyOk should be all clear for the moment 19:46
mazzyI will get back definitely in case of some blockers 19:46
fungiwe'll be around!19:47
fungiwe also have a mailing list, service-discuss@lists.opendev.org19:47
mazzyThanks a lot 19:48
mazzyNoted!19:48
fungilance is asking whether we still have any leaks in the osuosl environment... i guess i can just look for images or server instances with a non-current date in our account there? anything else need checking?19:57
Clark[m]That was what I did last time. Also volumes as it is boot from volume iirc19:58
fungii only see two of each of our image types in openstack image list, and just one server instance according to openstack server list19:59
fungii'll check volume list19:59
fungivolume list is empty... is that accurate?19:59
fungimaybe we're not doing bfv there?20:00
fungii need to pop out to run an errand but will be back in a few and can reply to his message20:00
Clark[m]We might not be doing bfv then20:02
Clark[m]Sounds like it is nice and clean20:02
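The audit fungi just ran boils down to listing each resource type and comparing creation dates against the current rotation (a sketch using the standard openstack CLI):

    # Look for leaked resources: anything with a non-current creation date.
    openstack server list -f value -c ID -c Name -c Status
    openstack image list
    openstack volume list
    # drill into anything suspicious:
    # openstack server show <server-id> -f value -c created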
opendevreviewMonty Taylor proposed opendev/system-config master: Produce both buster and bullseye container images  https://review.opendev.org/c/opendev/system-config/+/806448  20:20
mordredcorvus, Clark, fungi : ^^ I *think* that's the next step?20:21
clarkbmordred: looking at my only questions is how does that affect existing users? it shouldn't do muhc because they exisitng names remain they will just stop being updated?20:23
clarkboh actually we keep tagging buster with the old tags20:24
clarkbwhich means they will get updated until buster eols or we switch the tag to bullseye20:24
clarkbI do think we should aim to delete buster, but ya this looks like a good next step20:28
mordredyeah - I figure we'll swap the latest tag to buster at some point - and then stop building busters - but no rush on that20:29
Ramereth<- Lance from OSUOSL BTW in case you want to ping me here20:41
clarkbHello. As mentioned above it seems like things are pretty clean right now20:49
clarkbI'll let fungi write up a proper response when he returns20:50
fungiRamereth: d'oh! yep, all good, thanks for checking in!21:00
fungii only see the expected images and server instances in openstack image and server list output21:00
fungii'll follow up by e-mail too21:00
Ramereth\o/21:02
fungiand sent21:05
fungione thing we need to look into on our end is that nodepool thinks we have an unlocked node there in a deleting state from 128 days ago (doesn't show up in openstack server list though so i think it's stale info in our zookeeper or something)21:06
opendevreviewClark Boylan proposed opendev/system-config master: Produce both buster and bullseye container images  https://review.opendev.org/c/opendev/system-config/+/806448  22:51
clarkbmordred: ^ a minor update that should fix the job failures. Turns out ARG is weird22:51
fungii prefer to use ARRRGH 22:54
clarkbaye matey23:00
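One likely culprit behind "ARG is weird": an ARG declared before FROM is scoped to the FROM line only and must be redeclared inside the stage to be usable there. A self-contained demonstration of that documented Docker behavior (the image and variable names are illustrative, not the actual system-config change):

    # Show Dockerfile ARG scoping: the pre-FROM ARG selects the base
    # image but is empty inside the stage unless redeclared.
    cat > Dockerfile <<'EOF'
    ARG DEBIAN_RELEASE=buster
    FROM docker.io/library/debian:${DEBIAN_RELEASE}
    ARG DEBIAN_RELEASE
    RUN echo "building against ${DEBIAN_RELEASE}"
    EOF
    docker build --build-arg DEBIAN_RELEASE=bullseye .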
clarkbnext issue is I think we reverted too much?23:27
clarkbya we're doing eavesdrop and uwsgi as if they are on bullseye23:28
clarkbI think we can do that in followups; more important is that we have the images to start, so I'll fix the bindep files23:28
opendevreviewClark Boylan proposed opendev/system-config master: Produce both buster and bullseye container images  https://review.opendev.org/c/opendev/system-config/+/806448  23:31
*** dviroel|ruck is now known as dviroel|out23:44
