Tuesday, 2018-08-28

03:58 *** spredzy has quit IRC
03:59 *** spredzy has joined #softwarefactory
07:49 *** jpena|off is now known as jpena
11:28 *** jpena is now known as jpena|lunch
11:38 <rcarrillocruz> heya folks
11:38 <rcarrillocruz> can i get +3 on https://softwarefactory-project.io/r/#/c/13520/1/resources/tenant-ansible.yaml
11:38 <rcarrillocruz> tenant change
11:39 <rcarrillocruz> add and remove a few untrusted projects
11:39 <rcarrillocruz> tristanC ^ if you are around
11:51 <tristanC> rcarrillocruz: yep, done
11:51 <rcarrillocruz> thx mate
12:28 *** jpena|lunch is now known as jpena
13:31 <rcarrillocruz> tristanC: can you pls review https://softwarefactory-project.io/r/#/c/13473/ ? linters jobs fail if the repo in question has rst, as doc8 is not present in nodes by default
13:39 *** matburt has joined #softwarefactory
13:39 * gundalow waves to matburt
13:43 <gundalow> tristanC: rcarrillocruz: So following on from our discussions of single/multiple tenants for Ansible, matburt (Ansible AWX/Tower) is interested in using Zuul for testing and gating
13:44 <dmsimard> matburt: ohai
13:44 <matburt> dmsimard: greetings!
13:45 <matburt> looking to set up/use zuul for ansible awx/runner... I was pointed in this direction
13:46 <matburt> I've tried setting up my own zuul system but it's... not going well, and I'm pretty green on zuul configuration
13:46 <tristanC> matburt: Hey, welcome!
13:46 <matburt> thanks!
13:48 <matburt> Looks like I should read through some documentation on the site to see about setting up a configuration for us
13:48 <tristanC> matburt: sure, i guess we could add https://github.com/ansible/awx to the current 'ansible' tenant, this would be a change on this file: https://softwarefactory-project.io/cgit/config/tree/resources/tenant-ansible.yaml#n40
13:48 <matburt> yep and https://github.com/ansible/ansible-runner
13:49 <matburt> should I go through the process of submitting a patch to get that in?
13:49 <gundalow> matburt: If you wanted to do that, the process is explained at the end of https://github.com/ansible/community/blob/master/group-network/roles_development_process.rst#adding-enabling-zuul
13:49 <matburt> excellent!
13:49 <gundalow> (ie what to checkout and how to raise a PR)
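[Editor's note: in plain Zuul terms, the tenant change discussed above amounts to listing the two repos as untrusted projects. This is a hedged sketch using Zuul's own tenant-config schema — the SF resources/tenant-ansible.yaml file is a higher-level format that generates an equivalent definition, and the "github.com" connection name is an assumption:]

```yaml
# Sketch only: plain Zuul main.yaml equivalent of the SF resources change.
# The connection name (github.com) and config-project repo are assumed.
- tenant:
    name: ansible
    source:
      github.com:
        config-projects:
          - ansible/zuul-config
        untrusted-projects:
          - ansible/awx
          - ansible/ansible-runner
```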
13:49 <rcarrillocruz> tristanC: i really think we should have a separate tenant
13:49 <gundalow> rcarrillocruz: I was about to see if you were around :)
13:49 <rcarrillocruz> i'm on a call
13:49 <gundalow> I'm leaning towards separate tenants as well
13:49 <gundalow> ah, OK
13:50 <tristanC> rcarrillocruz: yes, but the story hasn't been groomed or planned yet...
13:51 <matburt> I'm okay with that also, if we want to set up an `ansible-awx` tenant I can stick awx and runner in there
13:51 <rcarrillocruz> well, you just said to add their repo on our tenant
13:51 <tristanC> i don't think it's an issue to add those 2 new projects on the existing tenant until ansible-network is resplit
13:51 <rcarrillocruz> i'm fine for playing around
13:51 <rcarrillocruz> but don't want to give the impression it will be the way going forward
13:51 <rcarrillocruz> ok
13:52 <gundalow> matburt: You happy to use the existing ansible tenant, though aware that we will most likely split it out into per use-case tenants later on?
13:52 <rcarrillocruz> in the end, my understanding is we didn't agree on resplitting tenants, we said to rename the current one to ansible-network
13:52 <rcarrillocruz> and consolidate two trusted projects into one
13:52 <rcarrillocruz> now it's another thing if other teams come aboard
13:52 <rcarrillocruz> different reqs
13:52 <rcarrillocruz> different jobs
13:52 <rcarrillocruz> different gating strategies
13:52 <rcarrillocruz> etc
13:52 <rcarrillocruz> different secrets
13:52 <rcarrillocruz> etc
13:53 <matburt> it's neither here nor there for me... what's the logic behind splitting it up?
13:53 <matburt> on the ansible side we're all the same group?
13:54 <tristanC> matburt: it's because the ansible-network team wants full control over the base job and the check/gate pipelines
13:54 <rcarrillocruz> matburt: zuul has a concept of job inheritance
13:54 <matburt> I can understand the control of the base job... but the pipeline jobs are just dictated by the projects themselves?
13:54 <rcarrillocruz> we have very specific needs, like most of our jobs will have a controller
13:55 <rcarrillocruz> and an appliance
13:55 <rcarrillocruz> each tenant will be able to have their own pipelines
13:55 <rcarrillocruz> if you want a gate one
13:55 <rcarrillocruz> or not
13:55 <matburt> gotcha
13:55 <matburt> well, I'd definitely like to move forward in a stable manner... so if yall are thinking about splitting then we should probably just live in our own tenant
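[Editor's note: "each tenant will be able to have their own pipelines" means every tenant's config-project defines (or omits) its own pipeline objects. A hedged sketch of a tenant-local gate pipeline; the "github" connection name and trigger details are assumptions:]

```yaml
# Sketch of a gate pipeline a tenant could define for itself.
# Connection name (github) and the exact trigger events are illustrative.
- pipeline:
    name: gate
    manager: dependent          # speculative merge ordering for gating
    trigger:
      github:
        - event: pull_request_review
          action: submitted
          state: approved
    success:
      github:
        merge: true             # merge the PR when jobs pass
    failure:
      github:
        comment: true           # report failure back on the PR
```

A tenant that doesn't want gating simply never defines this pipeline and relies on check only.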
13:55 <rcarrillocruz> also, the tenants hold secrets
13:56 <rcarrillocruz> like
13:56 <rcarrillocruz> my team has its own aws account
13:56 <rcarrillocruz> core has theirs, plus azure, etc
13:56 <matburt> I thought all of us on the ansible team shared an aws/gce/azure account?
13:56 <tristanC> rcarrillocruz: you can lock a job using a secret to a specific project too
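[Editor's note: what tristanC describes — restricting a secret-bearing job to particular projects — is done in Zuul with `allowed-projects` on the job. A hedged sketch; the secret, job, and project names here are hypothetical:]

```yaml
# Sketch only: names are hypothetical. A job that attaches a secret can
# be limited so only the listed projects may run it.
- secret:
    name: network-aws-account
    data:
      aws_access_key: !encrypted/pkcs1-oaep "..."   # encrypted blob elided

- job:
    name: network-appliance-test
    secrets:
      - network-aws-account
    allowed-projects:
      - ansible-network/some-repo
```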
13:56 <rcarrillocruz> not really
13:57 <rcarrillocruz> we have our own network aws account, matt didn't want to reuse the core CI one
13:57 <rcarrillocruz> which is ok, different accounts for different CIs i guess
13:57 <matburt> so could it end up being core + awx + runner and then networking has their own separate tenant, or am I misunderstanding?
13:58 <tristanC> matburt: yes, that's the current plan, however it's not in action yet
13:58 <rcarrillocruz> it depends on what you talk with matt, i haven't seen him involved in zuul, not sure what his plans are
13:58 <rcarrillocruz> but we will have our own tenant yes
13:58 <tristanC> i mean, on sf zuul side
13:59 <matburt> So it sounds like we put awx, runner in the current tenant and at some point in the future network splits out?
14:01 <tristanC> matburt: yes, it seems like there won't be much difference after the split, it will be like a copy of the ansible/zuul-config files back to ansible-network/zuul-config
14:01 *** nijaba has quit IRC
14:01 <matburt> okay excellent... I'll work on getting this submitted. I appreciate the help/clarity :)
14:08 *** pabelanger has joined #softwarefactory
14:14 <matburt> How is the nodepool configured?
14:14 <tristanC> matburt: you can find its configuration in https://softwarefactory-project.io/cgit/config/tree/nodepool
14:14 <tristanC> matburt: there are runC slaves that can run fast jobs on fedora or centos systems
14:15 <tristanC> matburt: and there are openstack instances, you can see the list of available labels here:
14:15 <tristanC> https://ansible.softwarefactory-project.io/zuul/labels.html
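[Editor's note: a hedged sketch of how a nodepool.yaml maps labels to provider images. The dib-fedora-27 label appears later in this log; the provider/cloud names and sizes are illustrative:]

```yaml
# Sketch only: provider and pool details are assumed, not copied from
# the real SF config. A label ties a node request to a provider image.
labels:
  - name: dib-fedora-27
    min-ready: 1              # keep one node pre-booted for fast starts

providers:
  - name: rdo-cloud           # illustrative provider name
    cloud: rdo
    diskimages:
      - name: dib-fedora-27   # built with disk-image-builder
    pools:
      - name: main
        max-servers: 10
        labels:
          - name: dib-fedora-27
            diskimage: dib-fedora-27
            flavor-name: m1.large
```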
14:17 <rcarrillocruz> tristanC: are you kosher with https://softwarefactory-project.io/r/#/c/13473/1/roles/linters/tasks/lint_doc8.yaml
14:17 <matburt> gotcha... so if we need some dependencies, should we set up our own images/configuration?
14:17 <matburt> or make that part of this test pre?
14:18 <rcarrillocruz> matburt: depends on your tests
14:18 <rcarrillocruz> like
14:18 <rcarrillocruz> what target platforms you want to test on
14:18 <rcarrillocruz> the nodepool labels are really base images
14:18 <rcarrillocruz> fedora
14:18 <rcarrillocruz> centos
14:18 <rcarrillocruz> etc
14:18 <tristanC> matburt: yes, custom slaves can be created, either using disk-image-builder, or using the runC customize playbook here: https://softwarefactory-project.io/cgit/config/tree/nodepool/runC/_linters-packages.yaml
14:18 <rcarrillocruz> if you want to test on something that is not there, then another image should be built with nodepool, there are ways to test that
14:18 <pabelanger> I wouldn't create an image per project, we've written some tooling upstream to make it easy for projects to install missing things via bindep
14:18 <tristanC> rcarrillocruz: i commented on the review, i think you should remove doc8 from the linters list
14:18 <rcarrillocruz> s/test/build/ that
14:19 <pabelanger> baking things into base images just creates more work in the long run, having them minimal and installing dependencies at runtime works much better between multiple projects and usually adds minimal time to jobs
14:19 <matburt> starting with runner... its deps are pretty light, but AWX is going to be a little different
14:20 <rcarrillocruz> usually you layer on top by installing software on base images
14:20 <rcarrillocruz> unless you really need to test on a base OS that is not available in nodepool
14:20 <pabelanger> matburt: https://docs.openstack.org/infra/bindep/readme.html for more information
14:20 <pabelanger> you'd create a bindep.txt file and it will list all the dependencies needed
14:20 <pabelanger> for OS packages
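[Editor's note: a short hedged example of a bindep.txt file; the package names are illustrative. Each line names a distro package, optionally qualified by platform selectors so the same file works on rpm- and dpkg-based nodes:]

```text
# Illustrative bindep.txt: package names are examples, not AWX/runner's
# actual dependencies.
gcc
libffi-devel [platform:rpm]
libffi-dev [platform:dpkg]
python3-devel [platform:rpm test]
```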
14:21 <rcarrillocruz> tristanC: thing is if the linters job tests rst, and i have rst, i would like it to test rst. The issue is that doc8 is not available in the linters node
14:21 <tristanC> rcarrillocruz: yes, that's because the job is missing an install task, but to be consistent, you should then add the install task to all linters
14:21 <pabelanger> rcarrillocruz: I'd much rather see the linters job use tox, then we can set up requirements files for each env, then baking that into jobs
14:22 <rcarrillocruz> i'm ok with that
14:22 <pabelanger> then you can do tox -edocs locally, and it also works
14:22 <rcarrillocruz> pabelanger: sure, but for that we need to consolidate having a tox.ini on our roles, i'm all over it
14:22 <matburt> awx installation may rely on deploying into docker and running smoke tests... is that something we can do?
14:22 <rcarrillocruz> i just need a fix for what we have now
14:22 <tristanC> rcarrillocruz: please submit change to the upstream review: https://review.openstack.org/530682
14:23 <pabelanger> rcarrillocruz: +1, we also use cookie-cutter to copypasta the tox.ini file across multiple repos
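[Editor's note: a hedged sketch of the kind of tox.ini a cookiecutter template might stamp across role repos, with the linters and docs envs discussed above; the env contents and paths are illustrative:]

```ini
# Sketch only: deps and paths are examples. Running "tox -e docs"
# locally exercises the same env CI would use, including doc8.
[tox]
envlist = linters,docs
skipsdist = True

[testenv:linters]
deps =
    yamllint
    ansible-lint
commands =
    yamllint .
    ansible-lint .

[testenv:docs]
deps = doc8
commands = doc8 docs/
```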
14:23 <rcarrillocruz> matburt: if the awx tests are docker based, then they can run just fine on a VM, e.g. fedora
14:23 <matburt> eg... `docker run` is the tooling there?
14:23 <pabelanger> matburt: yup, you can. I've been doing a POC with molecule and docker in zuul, works well. Just a little slow because of nested things
14:24 <matburt> excellent
14:24 <rcarrillocruz> matburt: yeah, i did a POC for exactly that
14:24 <rcarrillocruz> an ansible role which had a tox.ini, which used molecule, which used docker
14:24 <rcarrillocruz> let me link you
14:24 <rcarrillocruz> https://github.com/ansible-network/chouse-test/pull/5
14:25 <rcarrillocruz> that repo we basically copied over a community role
14:25 <rcarrillocruz> and wrote a basic job
14:25 <rcarrillocruz> that just called tox
14:25 <rcarrillocruz> which installed molecule
14:25 <rcarrillocruz> which called docker
14:25 <rcarrillocruz> docker in the VM was fine
14:25 <matburt> nice
14:26 <matburt> I'll dive in on this this afternoon, yall bear with me while I figure out how to do things
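[Editor's note: the chain described above — a Zuul job that just runs tox, where the tox env pulls in molecule, which in turn drives docker on the VM — can be sketched as a job definition like this; the job, playbook, and nodeset names are hypothetical:]

```yaml
# Sketch only: names are hypothetical. The playbook's sole task would be
# to invoke tox in the checked-out repo; molecule and docker come from
# the repo's own tox.ini env, not from the base image.
- job:
    name: molecule-tox
    description: Run molecule (via tox) against the role, using docker on the VM.
    run: playbooks/molecule-tox.yaml
    nodeset:
      nodes:
        - name: test-node
          label: dib-fedora-27
```

Keeping the job this thin means the same repo can run the identical `tox` invocation locally and in CI.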
15:40 <rcarrillocruz> doh
15:40 <rcarrillocruz> is there an RDO issue ongoing?
15:40 <rcarrillocruz> cant jump on rh irc
15:40 <rcarrillocruz> tristanC: ?
15:41 <pabelanger> rcarrillocruz: I can look, what are you seeing
15:41 <rcarrillocruz> https://ansible.softwarefactory-project.io/zuul/status.html
15:41 <rcarrillocruz> 13min, all queued up
15:41 <rcarrillocruz> dib-fedora-27
15:54 <rcarrillocruz> anything suspicious?
15:54 <rcarrillocruz> 26min and counting
15:54 <rcarrillocruz> sigh, can't jump on my bouncer to fire up vpn for RH server to ask rhos-ops
15:56 <pabelanger> rcarrillocruz: I think the tenant is at capacity
15:56 <pabelanger> so a backlog
15:57 <pabelanger> rcarrillocruz: yah
15:58 <pabelanger> nodepool is at capacity
15:58 <pabelanger> openstack.exceptions.HttpException: HttpException: 403: Client Error for url: https://phx2.cloud.rdoproject.org:13774/v2.1/servers, {"forbidden": {"message": "Quota exceeded for ram: Requested 8192, but already used 393216 of 396000 ram", "code": 403}}
15:58 <rcarrillocruz> Weee
16:11 *** jpena is now known as jpena|off
16:11 <jruzicka> jpena, btw I noticed you have dlrn.tests as opposed to my projects that have tests alongside the module directory... I was wondering if that has any advantages?
17:18 <tristanC> rcarrillocruz: gundalow: pabelanger: how does this story sound, and when can we proceed with the split: https://tree.taiga.io/project/morucci-software-factory/us/1647
17:29 <gundalow> tristanC: given step 2.1 does that mean fully separate configuration for a/a and a-n/*?
17:29 <tristanC> gundalow: i just updated the story with more detail
17:30 <tristanC> gundalow: we can't keep a-n/z-c in both projects during the copy of the base job and pipelines, otherwise this will cause a conflict
17:30 <tristanC> in both tenants*
17:30 <tristanC> so i think it's easier if we do a clean cut and move everything in one shot
17:31 <tristanC> another solution is to move a-n/z-c content to a/z-c during the split, that way we can re-work a-n/z-c independently
17:31 <gundalow> tristanC: would need to confirm, though I'm fairly sure we can live with a bit of (announced) downtime as long as it's not Monday or Tuesday (alternate Tuesdays are our release day for a-n)
17:32 <tristanC> it should go fairly quickly, please let us know when is the best time to this
17:33 <tristanC> to do this*
17:34 <pabelanger> if zuul needs to be stopped / started then the outage affects everybody
17:34 <pabelanger> so, think best to see what works for SF and keep it to a min
17:34 <pabelanger> but just announce the downtime
17:35 <pabelanger> going to get harder as we onboard more people / projects
17:35 <tristanC> pabelanger: there shouldn't be a zuul restart, or am i missing something?
17:36 <pabelanger> tristanC: I want to say, I ran into some issues with zuul not unloading a project from memory before
17:36 <pabelanger> when doing windmill testing
17:36 <pabelanger> not something we've really done upstream yet
17:36 <pabelanger> only way to fix was to stop / start zuul
17:37 <pabelanger> but maybe safer just to say zuul may stop / start during the window
17:37 <tristanC> pabelanger: i never heard of that behavior, is there a bug report with more detail?
17:37 <pabelanger> if it doesn't creat
17:37 <pabelanger> great*
17:37 <pabelanger> tristanC: no bug report
17:37 <pabelanger> would need to look for the discussion again
17:37 <tristanC> we do add and remove projects in the zuul config during sf-ci, and it doesn't seem to be an issue
17:37 <pabelanger> k
17:38 <tristanC> similarly, when we moved the ansible-network tenant to ansible, zuul was not restarted
17:40 <pabelanger> great, just saying it might be needed, if not awesome. Better to set expectations, just in case
17:44 <gundalow> +1 to setting expectations
18:08 <gundalow> tristanC: regarding https://tree.taiga.io/project/morucci-software-factory/us/1647 a) Lets go for downtime and simple step 2.1 b) Could you please add a "we will end up with" section at the end stating that the two GH orgs will be totally separate (no shared config) c) based on matburt's requirements, will that go into the gh/ansible tenant, or its own tenant?
18:08 <gundalow> mattclay: no action, just FYI ^
18:33 *** jpena|off has quit IRC
18:33 *** jpena|off has joined #softwarefactory
19:08 <rcarrillocruz> i'm off on Friday for a week
19:08 <rcarrillocruz> tristanC: so would be good to do so on Thu if possible, or tomorrow?
19:54 <gundalow> Works for me
22:39 *** sshnaidm is now known as sshnaidm|afk

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!