21:01:36 <strigazi> #startmeeting containers
21:01:37 <openstack> Meeting started Tue Aug  6 21:01:36 2019 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:40 <openstack> The meeting name has been set to 'containers'
21:01:48 <strigazi> #topic Roll Call
21:01:56 <strigazi> o/
21:01:57 <jakeyip> o/
21:03:11 <strigazi> Hello jakeyip
21:03:21 <strigazi> #topic Announcements
21:03:46 <jakeyip> hi strigazi. wondering if flwang is around?
21:04:24 <strigazi> After discussion with flwang, we will clean up the review list by abandoning all patches older than 30 days. Of course contributors can reopen them
21:04:46 <jakeyip> +1
21:04:57 <strigazi> jakeyip: flwang is attending a conference and cannot join
21:05:26 <strigazi> Since it's just the two of us, let's make this an open discussion
21:05:30 <strigazi> #topic Open Discussion
21:06:03 <strigazi> Is there something specific you would like to discuss jakeyip ?
21:06:48 <strigazi> Any patches or something you need to be fixed? Any issues with your deployment?
21:08:37 <jakeyip> thanks for putting a note on the quota patch. https://review.opendev.org/#/c/673782/ . since you think it's ok I will go on updating tests and such
21:09:01 <jakeyip> I'm working on a few things a.t.m. I'm interested in ceph's deployment of manila + magnum
21:09:20 <strigazi> would you like also to pick https://review.opendev.org/#/c/657435/ ?
21:10:12 <strigazi> jakeyip: https://gitlab.cern.ch/strigazi/csi-plugins
21:10:39 <jakeyip> strigazi: sure I'll have to read the etherpad later to get more context
21:10:55 <strigazi> jakeyip: the above will soon be updated to CSI 1.0 and csi-manila, but these work
21:10:57 <jakeyip> are you doing nfs / cephfs to users?
21:11:10 <strigazi> not nfs, only cephfs
21:12:23 <jakeyip> and your cluster is nautilus?
21:12:30 <tbarron> strigazi: note that there is a diff between the manila-provisioner and the newer manila-csi provisioner
21:12:49 <strigazi> tbarron: yeah, unfortunately I know :)
21:12:52 <jakeyip> hi tbarron!
21:13:49 <jakeyip> I see there's a cvmfs csi too. that might be interesting to our HPC guys
21:13:49 <tbarron> hi, sorry to interrupt
21:14:45 <strigazi> unfortunately, because we will need to change a bit. But we are keen to deploy when ready
21:15:28 <tbarron> ack
21:15:58 <strigazi> jakeyip: the only limitation of these two is that they only work with k8s up to 1.13.x
21:16:21 <strigazi> manila-csi will implement csi 1.0, right tbarron ?
21:16:36 <tbarron> strigazi: up
21:16:40 <tbarron> yes
21:16:51 <strigazi> and cvmfs-csi will have to be adapted accordingly
21:16:58 <jakeyip> I see. I was testing with 1.13.7 so it's ok.
21:17:06 <tbarron> i've only tested with 1.15.0 but 1.13.0+ should be good
21:17:21 <jakeyip> what are you running in prod strigazi tbarron ?
21:17:30 <strigazi> we do
21:17:55 <tbarron> and as jakeyip and I discussed, manila-csi requires a partner protocol plugin, so for cephfs native that is
21:18:09 <strigazi> tbarron: any pointer to the manifests you used for manila-csi?
21:18:11 <tbarron> the ceph-csi plugin (just for node )
21:18:20 <tbarron> to actually do the mounts
21:18:26 <tbarron> and it needs nautilus
21:18:54 <tbarron> strigazi: I'll share them in this channel later, they are right now on a private file server
21:19:32 <tbarron> strigazi: i've been testing with the nfs gateway and nfs partner plugin for ceph b/c that's my employer's immediate interest
21:19:44 <strigazi> tbarron: ok, thanks. ping me if it's not too much trouble
21:19:58 <tbarron> jakeyip: i'm not in production, am doing r&d as it were
21:20:15 <tbarron> strigazi: of course, will get them public and share
21:20:28 <tbarron> strigazi: not a secret, just a convenience atm
21:20:29 <jakeyip> ok. thanks for all your input!
21:21:01 <strigazi> tbarron: no problem, got it
21:25:13 <strigazi> jakeyip: Do you want to discuss anything else? Shall we wrap up otherwise?
21:26:42 <jakeyip> I am ok. just want to say thank you for the work on reviews recently. that and abandoning old reviews will make it easier for us to help out with reviewing
21:27:08 <flwang> sorry, i'm late
21:27:31 <brtknr> o/ hey all
21:27:41 <jakeyip> o/
21:27:49 <strigazi> o/
21:27:58 <flwang> strigazi: hey, i miss you
21:28:27 <strigazi> :)
21:28:49 <flwang> strigazi: did you see my question in the os patching patch?
21:29:34 <strigazi> in which one? os upgrade?
21:29:36 <flwang> now i'm stuck on an issue: i'm trying to create a temp service to uncordon the node after upgrade/reboot, but after a fedora atomic reboot, all the service files under /etc/systemd/system are deleted
21:29:39 <flwang> any idea?
21:29:57 <flwang> i even tried to use ostree commit to commit the current file system, but it didn't help
21:30:03 <flwang> os upgrade
21:31:16 <strigazi> nothing off the top of my head, I'll have a look
21:32:30 <flwang> strigazi: thank you
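(For context, the kind of temporary unit flwang describes would look roughly like the sketch below; the unit name, kubectl/kubeconfig paths and use of %H for the node name are assumptions, and the open problem above is that this file does not survive the Fedora Atomic reboot.)

# sketch of a one-shot "uncordon after reboot" unit; names and paths are hypothetical
cat > /etc/systemd/system/uncordon-after-upgrade.service <<'EOF'
[Unit]
Description=Uncordon this node after an OS upgrade reboot
After=kubelet.service network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/kubectl --kubeconfig /etc/kubernetes/admin.conf uncordon %H

[Install]
WantedBy=multi-user.target
EOF
systemctl enable uncordon-after-upgrade.service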
21:32:43 <flwang> strigazi: and recently, i've been working on fedora atomic 29
21:33:03 <strigazi> flwang:  You need two things for f29
21:33:05 <flwang> i just found that we have to enable hwrng for nova
21:33:24 <strigazi> one is the patch I did with cni (for calico maybe not an issue)
21:33:31 <strigazi> the other is what you said
21:33:38 <jakeyip> ah yes we have that too :)
21:33:38 <strigazi> hwrng
21:33:51 <strigazi> we have this in all our flavors now and all images
21:34:07 <flwang> strigazi: hwrng in nova.conf and nova flavors, and a property on the image
21:34:11 <jakeyip> strigazi: do you have any rate limits?
21:34:18 <strigazi> not in nova.conf
21:34:25 <strigazi> no rate limits
21:34:36 <flwang> strigazi: you mean don't need it for nova.conf?
21:34:43 <strigazi> only one property in the flavor and one in the image
21:34:53 <strigazi> nothing in nova.conf
21:34:53 <jakeyip> don't think so, as strigazi says, just flavor and images
21:34:56 <flwang> strigazi: ok, i will double check it again
21:35:19 <flwang> jakeyip: are you saying you guys also didn't change the nova.conf, but just the flavor and image?
21:35:29 <strigazi> yes
21:35:39 <jakeyip> yeap it worked for us with flavor + image
21:35:40 <flwang> strigazi: nice, it's much nicer
21:35:44 <flwang> great
21:35:56 <brtknr> flwang: strigazi: perhaps we should add some notes in the docs to inform users about the hwrng quirk
21:35:57 <strigazi> flavor: properties | hw_rng:allowed='True'
21:36:06 <jakeyip> what is the nova.conf option you added? I can check what's in our nova.conf
21:36:41 <strigazi> image  hw_rng_model='virtio',
21:36:41 <flwang> rng_dev_path=/dev/hwrng
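For reference, the two properties described above can be set with the OpenStack CLI roughly as follows; this is a sketch, with placeholder flavor and image names, and per strigazi no nova.conf change is needed:

# sketch: expose a virtio RNG device to instances via flavor + image properties
# ("m1.magnum" and "fedora-atomic-29" are placeholder names)
openstack flavor set --property hw_rng:allowed='True' m1.magnum
openstack image set --property hw_rng_model='virtio' fedora-atomic-29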
21:36:44 <jakeyip> yeah we would like a table of k8s version + os version + magnum version
21:36:53 <flwang> i wonder if there is a default value for that
21:37:22 <flwang> jakeyip: i will start to work out a matrix for that
21:37:36 <flwang> jakeyip: pls help contribute when reviewing it
21:38:08 <brtknr> strigazi: we did the same for our fa29 and it fixed the bootstrapping
21:38:10 <strigazi> https://review.opendev.org/#/c/616603/
21:38:27 <strigazi> the nova team doesn't seem to bother with it
21:39:26 <strigazi> it is a feature in kernels 4.19 or greater
21:39:36 <flwang> strigazi: thanks for sharing that link, we should push that in
21:39:56 <strigazi> the kernel needs more entropy to generate random numbers required somewhere in cloud-init
21:40:20 <flwang> strigazi: btw, did you have a chance to try fc30?
21:40:42 <flwang> given there is no cloud-init in fc30, i think we may need a big change for our code?
21:41:17 <jakeyip> is fc30 work being tracked in a story ?
21:42:52 <brtknr> i tried to boot fc30 baremetal and did not get very far
21:43:07 <strigazi> I have tried fedora coreos
21:43:14 <strigazi> works fine for vms
21:43:22 <flwang> jakeyip: https://storyboard.openstack.org/#!/story/2006209
21:43:41 <strigazi> needs some work, not a drop-in replacement
21:43:53 <flwang> strigazi: cool
21:44:01 <flwang> strigazi: pls use https://storyboard.openstack.org/#!/story/2006209 to track the status
21:44:46 <strigazi> what does storage class have to do with fedora coreos?
21:45:18 <brtknr> I think he means this issue: https://storyboard.openstack.org/#!/story/2006348
21:45:36 <brtknr> flwang: ^
21:48:33 <flwang> strigazi: sorry, yes, this one https://storyboard.openstack.org/#!/story/2006348
21:48:38 <strigazi> ok
21:48:42 <flwang> jakeyip: ^
21:49:17 <jakeyip> yeah thanks flwang I saw that
21:49:56 <jakeyip> storageclass is interesting too I might need that too. any wip patches yet?
21:50:51 <strigazi> not sure if StorageClass can be generic enough
21:51:19 <flwang> strigazi: my idea is to have a special config as a post-install-script
21:51:38 <flwang> so that each vendor can define their own yaml file
21:52:01 <flwang> for this case, just a simple yaml to create the storageclass
21:52:10 <flwang> with kubectl apply -f
21:52:28 <flwang> post-install-yaml
21:53:00 <jakeyip> where is this script going to be located?
21:53:06 <jakeyip> master node?
21:53:13 <flwang> wherever you want
21:53:39 <strigazi> So we are not talking about a patch for storageclass
21:53:40 <flwang> it can be a link pointing to a file on swift
21:53:52 <flwang> strigazi: we're talking about https://storyboard.openstack.org/#!/story/2006209
21:54:18 <flwang> to have an out-of-the-box usable storage class
21:54:23 <strigazi> yes, but the proposed design is to have the posthook do it, right?
21:54:51 <flwang> yes, it's just an option
21:55:03 <flwang> i'd like to get input from you guys on which is the better way
21:55:23 <strigazi> I'm not against it, fine for me
21:56:03 <jakeyip> i run kubectl from my desktop using the KUBECONFIG env var. writing a file to swift might work, but it seems clunky
21:56:39 <flwang> jakeyip: another way we can do it is like what we did for the default k8s-keystone-auth policy file
21:56:47 <strigazi> generic url might be better
21:57:02 <flwang> strigazi: yep, i prefer a generic url as well
21:57:19 <strigazi> could be s3, an http server
21:58:03 <flwang> brtknr: any comment?
21:58:29 <brtknr> sorry i was just reading about how ignition works
21:59:42 <brtknr> we were thinking of adding native support for manila
21:59:54 <strigazi> Anything else to discuss, guys? The time is almost up
21:59:58 <brtknr> is this too much bloat or would this be desirable?
22:00:08 <flwang> strigazi: i'm good, thank you for joining us
22:00:08 <strigazi> ignition + manila?
22:00:28 <brtknr> e.g. get magnum to configure manila as the default storage class
22:00:36 <strigazi> flwang: cheers
22:00:46 <brtknr> similar to what you get with google cloud
22:00:53 <strigazi> brtknr: it can't be generic, manila has types
22:01:07 <strigazi> each cloud has different names
22:01:36 <strigazi> eg for "Meyrin Cephfs"
22:01:39 <brtknr> strigazi: so what if MANILA_SHARE_TYPE is defined?
22:01:43 <strigazi> and "Geneva testing"
22:02:03 <jakeyip> just wondering, can we set up a default using labels passed in to the magnum cluster?
22:02:10 <strigazi> we have two for all users and more on demand for special users
22:02:24 <jakeyip> e.g. similar to how cinder options are passed in for docker-volume-type.
22:03:10 <brtknr> there are already pieces for keystone authentication to generate share secrets
22:03:24 <flwang> jakeyip: pass in what?
22:03:37 <strigazi> We could, eg https://gitlab.cern.ch/strigazi/csi-plugins/blob/master/manila-provisioner.yaml#L82
22:03:43 <brtknr> not the end of the world if this is a post-deployment step...
22:03:47 <strigazi> but it has many params
22:03:51 <flwang> jakeyip: there are too many attributes in a storage class yaml
22:03:57 <flwang> strigazi: +1
22:04:11 <brtknr> yes exactly
22:04:14 <strigazi> I need to leave you guys, shall I end the meeting?
22:04:23 <flwang> strigazi: let's end it
22:04:35 <strigazi> thanks flwang jakeyip brtknr
22:04:41 <flwang> jakeyip: can you see the point? with labels, it's too complicated
22:04:58 <brtknr> thank you!
22:04:59 <flwang> jakeyip: that's why i propose to pass a file/url directly to make things easier
22:05:09 <brtknr> as far as i can see, the only parameter needed is: type: "Meyrin CephFS"
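For illustration, a StorageClass for the manila provisioner with the share type as its main parameter could look roughly like the sketch below; the class name is arbitrary, and the provisioner string, secrets and any extra parameters come from the deployment's manila-provisioner manifest (such as the one strigazi linked above):

# sketch: StorageClass whose key parameter is the manila share type;
# provisioner name and additional parameters depend on the provisioner in use
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: externalstorage.k8s.io/manila
parameters:
  type: "Meyrin CephFS"
EOF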
22:05:10 <strigazi> #endmeeting