20:37:51 <SlickNik> #startmeeting reddwarf
20:37:52 <openstack> Meeting started Tue May 14 20:37:51 2013 UTC.  The chair is SlickNik. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:37:53 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:37:55 <openstack> The meeting name has been set to 'reddwarf'
20:38:03 <SlickNik> #topic Update to action items
20:38:20 <SlickNik> #link http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-05-07-21.05.html
20:38:34 <SlickNik> First one is mine.
20:38:53 <SlickNik> I haven't had a chance to look into archiving the logs yet.
20:39:05 <SlickNik> Got pulled into working on other stuff.
20:39:14 <SlickNik> So I'm going to re-action this.
20:39:43 <SlickNik> #action Slicknik to look into archiving logs for rdjenkins test runs.
20:39:54 <cp16net> awwww
20:40:15 <SlickNik> datsun180b: you got the second one..
20:40:15 <cp16net> SlickNik: i was hopin i would come back from vacay and it would be done :-P
20:40:16 <datsun180b> Next I'm up. I managed to pull a couple of the gerrit changesets and run them with no problem on my machine, but I haven't had a chance to figure out what the delta between jenkins and my own box is
20:40:17 <cp16net> haha
20:40:45 <datsun180b> Besides that, the jobs appear to be working with unprecedented consistency as of late
20:40:50 <SlickNik> datsun180b: I've only seen it fail on the cloud instances.
20:40:56 <SlickNik> datsun180b: and it's intermittent
20:41:07 <SlickNik> datsun180b: Yeah, seems to be happening less of late too.
20:41:22 <grapex> So it's interesting, hub_cap couldn't run it on the Rackspace cloud due to the resize tests never finishing IIRC
20:41:39 <grapex> It seems like we may just hit these issues in general when running on a cloud
20:42:03 <robertmyers> maybe a longer timeout?
20:42:22 <cp16net> run it manually and see how long it takes
20:42:23 <SlickNik> grapex: perhaps that's the case. I'm inclined to put this on the back burner for now and keep an eye out for it happening again.
20:42:27 <cp16net> maybe it is just taking longer
20:42:29 <grapex> Maybe we should consider running only a subset of the real mode tests on the reddwarf Jenkins box.
20:42:35 <datsun180b> #agree
20:42:51 <cp16net> grapex: i dont think thats a good idea
20:42:53 <grapex> My issue is I feel the Reddwarf Jenkins box has been failing a lot of pull requests, which causes them to not get looked at and slows things down
20:42:57 <datsun180b> Moving forward I'd love to remove as many free radicals as possible
20:43:04 <grapex> We'd run everything except for the resize tests.
20:43:26 <grapex> cp16net: I'm not saying we canonically get rid of them, I just don't think we have the environment to run these tests now
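(For context on grapex's subset idea: the real-mode tests are driven by proboscis, so the resize tests could be tagged with their own group and left out of the Jenkins invocation. A minimal sketch, assuming hypothetical group names rather than the actual reddwarf-integration ones:)

    from proboscis import test, TestProgram

    # Tag the flaky resize tests with a dedicated group (name is hypothetical).
    @test(groups=["dbaas.resize"])
    def test_resize_instance():
        pass

    @test(groups=["dbaas.smoke"])
    def test_create_instance():
        pass

    if __name__ == "__main__":
        # The Jenkins job could then run only the stable groups, e.g.:
        #   python run_tests.py --group=dbaas.smoke
        TestProgram().run_and_exit()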
20:44:02 <SlickNik> grapex / cp16net: In the near future, I want to figure out how moving to openstack-ci will affect this too.
20:44:14 <grapex> It'd be interesting to know if there are issues with resize related tests in Tempest on openstack-ci
20:45:06 <cp16net> thats a good point
20:45:30 <SlickNik> I think we should follow up with hub_cap / ttx / openstack_ci to figure out what the next steps would be to get integrated with devstack_vm_gate.
20:45:34 <cp16net> action item for someone to look into that?
20:45:59 <SlickNik> any volunteers?
20:46:20 <grapex> I can't speak for hub_cap but he had mentioned that recently
20:46:22 <datsun180b> i wasn't real helpful except as a control group this last week
20:46:39 <SlickNik> #action SlickNik to follow up with hub_cap / openstack_ci to see what the next steps are.
20:46:59 <SlickNik> cool, I can follow up on it.
20:47:05 <datsun180b> thanks for that
20:47:08 <SlickNik> no worries.
20:47:12 <SlickNik> let's keep moving.
20:47:23 <datsun180b> I think that means you just got #3
20:47:24 <grapex> How about a second one
20:47:31 <SlickNik> robertmyers, you're next.
20:47:36 <grapex> Wait
20:47:48 <SlickNik> sure, grapex?
20:47:49 <robertmyers> okay, the notification pull request is passing now
20:47:58 <grapex> Let's create an action item to look into if there are resize test problems in Tempest and see if maybe we can determine a fix
20:48:45 <SlickNik> That's a good idea.
20:49:00 <SlickNik> grapex: do you want to follow up on that?
20:49:07 <grapex> Sure
20:49:22 <grapex> #action GrapeX to determine if Tempest community has similar issues with resize.
20:49:27 <SlickNik> thanks!
20:49:42 <SlickNik> moving on to the notifications patch.
20:49:47 <datsun180b> back to you SlickNik for #5
20:49:54 <SlickNik> Thanks robertmyers, it looks like it's passing.
20:50:12 <robertmyers> I think it is ready, just needs a +2
20:50:20 <robertmyers> or more eyes on it
20:50:36 <datsun180b> i think we just need more eyes in general
20:50:40 <robertmyers> #link https://review.openstack.org/#/c/26884/
20:50:46 <robertmyers> go now
20:51:03 <cp16net> datsun180b: like flies?
20:51:07 <datsun180b> well my +1 only means so much
20:51:13 <SlickNik> compound eyes.
20:51:35 <cp16net> i had issues this morning signing in to review these
20:51:40 <cp16net> looks like it's working now tho
20:51:50 <cp16net> i'll look at it today/tonight
20:51:56 <SlickNik> I +2'ed. hub_cap had some comments, so was waiting for either him / grapex to approve.
20:51:58 <datsun180b> but yes, more eyes in general
20:52:00 <datsun180b> #link https://review.openstack.org/#/q/is:watched+status:open,n,z
20:52:13 <grapex> SlickNik: Good point
20:52:39 <SlickNik> Just wanted to make sure we got all the comments addressed.
20:52:56 <SlickNik> okay, #5.
20:53:05 <SlickNik> Didn't get a chance to look into it yet.
20:53:06 <grapex> Reminder- if you want something to get into trunk, and you comment on it, be sure to +1 it later! Otherwise it can look like there's concern over it and will make people not look at it. I've done the same thing myself...
20:53:41 <grapex> SlickNik: Can +1's in bulk get something merged?
20:54:00 <SlickNik> nope, grapex.
20:54:18 <grapex> SlickNik: Ok, thanks
20:54:19 <datsun180b> all right, i had my concerns
20:54:25 <cp16net> yeah, only an approval triggers the merge
20:54:42 <SlickNik> for a merge, a patch needs at least one +2 and an "Approved"
20:55:19 <SlickNik> That's the end of action items.
20:55:50 <datsun180b> So what else to discuss?
20:55:52 <SlickNik> #topic TC
20:56:00 <SlickNik> Just walking through the agenda
20:56:15 <SlickNik> Anything to discuss here?
20:56:19 <cp16net> i dont think anyone updated the agenda...
20:56:27 <cp16net> that was old i believe
20:56:38 <hub_cap> hey looks like we agreed on 30min early ;)
20:56:42 <SlickNik> cp16net: I think you're right.
20:56:43 <cp16net> btw... we are in incubation
20:56:46 <cp16net> :-P
20:56:51 <SlickNik> welcome hub_cap.
20:56:57 <SlickNik> cp16net: yay! :P
20:57:00 <cp16net> hub_cap: yeah it was a surprise to me as well
20:57:10 <cp16net> moving on
20:57:24 <grapex> hub_cap: I was momentarily surprised, but am used to constantly learning things late and so quickly got over it and accepted the time as the new reality.
20:57:28 <SlickNik> hub_cap any updates on next steps for incubation?
20:57:32 <hub_cap> not yet
20:57:43 <cp16net> live in the now
20:57:51 <hub_cap> exactly
20:57:59 <SlickNik> #topic OpenVZ
20:58:04 <hub_cap> sry i just got back from lunch, working on some super secret stuffs ;)
20:58:15 <grapex> hub_cap: perl rewrite?
20:58:17 <imsplitbit> yay openvz
20:58:23 <cp16net> go?
20:58:35 <grapex> Go... so hot right now. Go.
20:58:41 <SlickNik> no worries. Just got done with the action items and working through the rest of what looks like last week's agenda. :P
20:58:47 <imsplitbit> currently I'm merging in the migration code we wrote internally and will be releasing it into the wild
20:58:54 <imsplitbit> it's in my public repo
20:58:57 <imsplitbit> hold for link
20:59:03 <SlickNik> holding.
20:59:10 <imsplitbit> #link https://github.com/imsplitbit/nova/tree/openvz_support
20:59:13 <SlickNik> thx
20:59:29 <grapex> imsplitbit: This needs some ascii art on the README
20:59:43 <hub_cap> #agreed
20:59:43 <imsplitbit> the code is currently merged in but I need to add more unittests for the migration code
20:59:46 <cp16net> #agreed with grapex
20:59:50 <imsplitbit> and then test in my lab
20:59:59 <imsplitbit> then add ascii art to the README
21:00:12 <imsplitbit> should be done by EOB monday
21:00:13 <grapex> imsplitbit: So will this code as time goes on need to be constantly kept up to date with Nova trunk?
21:00:13 <cp16net> ok then we can put the stamp of approval on it
21:00:28 <imsplitbit> grapex: well I think we just tag releases
20:59:29 <grapex> I know at one point there was an idea of keeping it to only new files plus a few patches to existing files
21:00:38 <imsplitbit> this one will be for havana
21:00:42 <cp16net> imsplitbit: yea that sounds like a good idea
21:01:06 <imsplitbit> It should be backportable to grizzly
21:01:14 <imsplitbit> but right now it's 100% bleeding edge
21:01:26 <cp16net> awesome
21:01:28 <grapex> Cool
21:01:29 <SlickNik> gotcha.
21:01:32 <SlickNik> next up
21:01:45 <SlickNik> #topic Jenkins
21:02:01 <SlickNik> Things seem much better in rdJenkins world.
21:02:31 <datsun180b> It would seem so, but I'm still keeping an eye on it
21:02:33 <grapex> Did anything change in the redstack script or tests to cause that?
21:02:42 <SlickNik> Most builds seem to pass, and no more false positives.
21:03:31 <SlickNik> Well, we nailed down the right regex to use, and Matty made some needed fixes to the Jenkins cloud instance plugin.
21:03:41 <datsun180b> I don't know about "no more", but it seems they're greatly reduced compared to say two weeks ago
21:04:04 <datsun180b> But even last week I had something get thrown to "abandoned" because rdjenkins zapped it and a week passed
21:04:15 <grapex> datsun180b: What day was that?
21:04:16 <datsun180b> I did notice that rdjenkins stopped working from :8080 and seems to have gone to :80
21:04:22 <SlickNik> datsun180b: I haven't seen one in two weeks. If you see one, let me know.
21:04:30 <SlickNik> It's only SSL right now.
21:04:33 <datsun180b> Well I woke it up yesterday, let me find the link
21:04:36 <SlickNik> So 443, I believe.
21:04:56 <SlickNik> #link https://rdjenkins.dyndns.org
21:05:04 <datsun180b> #link https://review.openstack.org/#/c/28061/ for example
21:05:17 <datsun180b> All I did was wake it up and it all passed again
21:06:00 <datsun180b> So whatever you're feeding jenkins now, keep it up
21:06:29 <SlickNik> heh, will do
21:06:41 <SlickNik> anything else Jenkins related?
21:06:42 <cp16net> steroids?
21:07:01 <SlickNik> actually powerthirst.
21:07:06 <grapex> datsun180b: So are these failures resize?
21:07:12 <SlickNik> http://www.youtube.com/watch?v=qRuNxHqwazs
21:07:22 <datsun180b> i think it was a failure to upload to glance because a table called 'reddwarf' was missing
21:07:26 <grapex> Because I'd rather not commit to that action item if the issues aren't related to the resize tests.
21:07:37 <SlickNik> oh datsun180b: that was when devstack broke us.
21:07:51 <grapex> SlickNik: It's got what tests need. :)
21:07:52 <datsun180b> gotcha, sounds like we found the loose bearing
21:08:19 <SlickNik> yup, I can send you the patchset that fixed it, fyi.
21:08:23 <SlickNik> after the meeting.
21:08:33 <datsun180b> you know where to find us
21:08:35 <SlickNik> okay, let's move on.
21:08:39 <SlickNik> yup!
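(An aside on the regex fix SlickNik mentions: the false positives came down to how the job scraped the console log. The patterns below only illustrate the approach; they are not the actual rdjenkins configuration:)

    import re

    # Anchored pass/fail markers keep an incidental "OK" or "error" in
    # test output from flipping a build result by accident.
    PASS_RE = re.compile(r"^OK\b", re.MULTILINE)
    FAIL_RE = re.compile(r"^(FAILED|ERROR)\b", re.MULTILINE)

    def build_passed(console_log):
        """Pass only if a summary OK matched and no failure line did."""
        return bool(PASS_RE.search(console_log)) and not FAIL_RE.search(console_log)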
21:08:53 <SlickNik> #topic Backup Status
21:09:09 <SlickNik> Got a bunch of comments from you guys.
21:09:27 <SlickNik> Working on addressing them and uploading a new patchset.
21:09:32 <robertmyers> most of mine were just nits, but it looks good
21:09:34 <SlickNik> So stay tuned for more on that.
21:09:36 <grapex> Cool
21:09:59 <SlickNik> nothing more on that.
21:10:12 <SlickNik> #topic Notification Plan
21:10:43 <SlickNik> Let's get robertmyers' patch merged.
21:10:54 <robertmyers> yes
21:11:05 <robertmyers> and start talking exists events
21:11:05 <juice> working on getting the exists event stood up for our billing team to test
21:11:25 <datsun180b> i'll +1 it legit, it's been in an open tab all day
21:11:38 <SlickNik> yup. thanks robertmyers and juice for the awesome work on this.
21:11:39 <robertmyers> juice: are you doing this as public code or private?
21:12:23 <juice> public
21:12:46 <juice> just need to get patch submitted so saurabh can deploy to a test env. here
21:12:56 <juice> and then I'll iterate over it to improve
21:12:57 <SlickNik> I believe the idea is to do something along the lines of the way nova does it.
21:13:07 <robertmyers> in taskmanager?
21:13:30 <SlickNik> yes, in taskmanager...
21:13:33 <juice> task manager would be the best place to put it
21:13:33 <SlickNik> juice?
21:14:18 <SlickNik> okay any more notifications related info?
21:14:58 <SlickNik> ...
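(For reference, "the way nova does it" here means emitting events through the common notifier API. A rough sketch of an exists event sent from the taskmanager, assuming the oslo notifier module is synced into reddwarf.openstack.common; the payload fields and publisher id are assumptions, not the merged design:)

    from reddwarf.openstack.common.notifier import api as notifier

    def send_exists_event(context, instance):
        # Hypothetical payload; the real fields would follow the
        # reddwarf-notifications blueprint.
        payload = {
            "instance_id": instance.id,
            "tenant_id": instance.tenant_id,
            "flavor_id": instance.flavor_id,
            "created_at": str(instance.created),
        }
        notifier.notify(context, notifier.publisher_id("taskmanager"),
                        "reddwarf.instance.exists", notifier.INFO, payload)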
21:14:59 <SlickNik> #topic Rootwrap
21:15:12 <SlickNik> I don't think anyone has looked into this.
21:15:22 <datsun180b> not i, said the Datsun
21:15:25 <cp16net> SlickNik: is there a blueprint on exists events?
21:15:48 <cp16net> sorry late to that party
21:15:55 <SlickNik> cp16net: I don't think so. juice?
21:16:24 <SlickNik> hub_cap: We can haz blu print, plz?
21:16:24 <robertmyers> #link https://blueprints.launchpad.net/reddwarf/+spec/reddwarf-notifications
21:16:27 <juice> nope
21:16:37 <juice> oh wait there is one! ;)
21:16:40 <hub_cap> heh
21:16:41 <SlickNik> oh, thanks robertmyers
21:16:56 <SlickNik> Looks like it's part of the original notifications blueprint.
21:17:22 <SlickNik> So let's keep on with updating that.
21:17:36 <cp16net> ok
21:18:13 <SlickNik> As for the rootwrappah, I think we're gonna pass on that one till we have some more bandwidth to work on it.
21:18:44 <SlickNik> So it might be a while before we tackle it.
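(For whoever picks rootwrap up later: it replaces a blanket sudoers entry with per-command filter files. A minimal example in nova's filter format; the path and the commands listed are just guesses at what the guestagent would need:)

    # /etc/reddwarf/rootwrap.d/guestagent.filters (hypothetical path)
    [Filters]
    # CommandFilter: allow exactly this binary to run as root
    mount: CommandFilter, /bin/mount, root
    mysqld_safe: CommandFilter, /usr/bin/mysqld_safe, root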
21:19:40 <SlickNik> moving on
21:19:56 <SlickNik> #topic Actions / Action Events
21:21:09 <SlickNik> I believe we de-prioritized that for the moment based on all the actions that came out of incubation...
21:21:38 <grapex> SlickNik: I think hub_cap has been busy, so no news on that front.
21:21:50 <SlickNik> yup, that was my understanding.
21:22:00 <hub_cap> def grapex, no news
21:22:04 <SlickNik> so that brings us to...
21:22:07 <hub_cap> ill be finishing it soon, prolly nxt wk
21:22:20 <SlickNik> #topic Meeting Time
21:22:44 <datsun180b> ooh ooh
21:22:45 <cp16net> so....
21:22:50 <datsun180b> can we talk about ephemeral storage
21:22:57 <esmute> yes
21:23:07 <datsun180b> 1. what is it
21:23:23 <cp16net> 2. make a blueprint?
21:23:29 <datsun180b> ^^
21:23:33 <esmute> sorry.. i assumed it was mentioned last meeting, which i didn't attend
21:23:33 <cp16net> or let us help you make it :)
21:23:43 <datsun180b> we can give you a blank BP if you need it
21:23:55 <esmute> there is a bug https://bugs.launchpad.net/reddwarf/+bug/1175719
21:24:31 <esmute> we can convert that bug into a bp if you want
21:24:53 <datsun180b> Please do
21:25:33 <grapex> esmute: I've got a question- how come on the models, we're renaming what is called "ephemeral" by Nova to "storage"? https://review.openstack.org/#/c/28751/3/reddwarf/flavor/models.py
21:25:42 <esmute> here in HP we need not only support for Volumes and root partition, which is controlled by the reddwarf_volume_support flag
21:26:00 <cp16net> esmute: can you use this plz
21:26:01 <cp16net> https://blueprints.launchpad.net/reddwarf/+spec/ephemeral-storage-volume
21:26:05 <esmute> we also need support for storing mysql data in ephemeral partitions
21:26:55 <esmute> grapex: we thought that the term "ephemeral" may be too confusing for customers...and we didnt want to use "disk" because it overlapped with the "disk" from nova
21:27:01 <esmute> so we decided on "storage"
21:27:16 <esmute> will do cp16net
21:27:17 <datsun180b> So ephemeral is an alternative to volume or root storage
21:27:17 <SlickNik> grapex: also because — like volumes — all of the ephemeral drive will be available as mysql 'storage' for the database instance.
21:27:18 <grapex> esmute: So this is funny- hub_cap, SlickNik, your thoughts on this might be helpful-
21:27:26 <grapex> we originally made Flavors look like Nova flavors on purpose
21:27:31 <grapex> as a convenience to the customers
21:27:38 <grapex> But the TC seemed to not like that
21:27:56 <grapex> however, renaming flavor attributes seems confusing to me. Clearly it's a Nova flavor, so why rename the fields?
21:28:09 <hub_cap> i dislike the rename of fields
21:28:22 <hub_cap> maybe we should have a "moved" and point to the nova install if they try to get flavors
21:28:30 <cp16net> i dislike changing a field's purpose
21:29:08 <SlickNik> I'm not sure I know exactly where I stand on this one yet.
21:29:10 <esmute> so you guys like to leave it as "ephemeral"?
21:29:28 <cp16net> like the reddwarf_volume_support was true/false and now you repurpose it to be 3 different values?
21:29:34 <grapex> I'd prefer to leave it as "ephemeral", but honestly I need to research a bit more on ephemeral flavors before I know what I think.
21:29:52 <hub_cap> so you are mounting something from the host i assume for this?
21:30:02 <SlickNik> Might need to read up / think a bit more about this one.
21:30:14 <esmute> cp16net: yes.. because originally, there were only two options.. volumes/root partition
21:30:25 <cp16net> no it was volumes on or off
21:30:27 <esmute> now we wanted to support another option.. ephemeral
21:30:33 <esmute> yes..
21:30:34 <SlickNik> hub_cap: it's auto mounted as vdb when it's part of the flavor.
21:30:51 <hub_cap> we might want to change this to some sort of strategy
21:30:52 <esmute> when you boot an instance with ephemeral, a new partition is made available... /dev/vdb
21:30:57 <grapex> esmute: Wouldn't ephemeral just mean the flavor is ephemeral and volume support is off?
21:31:07 <hub_cap> reddwarf.compute.volume.Ephemeral etc etc
21:31:08 <esmute> which can be mounted and used the same way as volume-support
21:31:14 <hub_cap> and it can stay "disk"
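(A rough sketch of the strategy idea hub_cap is floating here — the class paths, attribute names, and config value are all hypothetical, not an agreed design:)

    # Picked via config, e.g.: volume_strategy = reddwarf.compute.volume.Ephemeral

    class VolumeStrategy(object):
        """Decides where the guest's mysql data lives."""
        def data_device(self, instance):
            raise NotImplementedError

    class CinderVolume(VolumeStrategy):
        """Remote block storage attached to the instance."""
        def data_device(self, instance):
            return instance.volume_device  # hypothetical attribute

    class Ephemeral(VolumeStrategy):
        """Local ephemeral partition supplied by the flavor."""
        def data_device(self, instance):
            return "/dev/vdb"

    class RootPartition(VolumeStrategy):
        """No extra device; data stays on the root filesystem."""
        def data_device(self, instance):
            return None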
21:31:19 <datsun180b> It sounds like the scope of ephemeral storage is bigger than our meeting allotment
21:31:23 <hub_cap> u have to ask yourself this
21:31:33 <hub_cap> does the customer care whether they get ephemeral or some other "disk"
21:31:33 <grapex> Ok, I get it- the ephemeral part is for the unused disk part of flavors
21:31:41 <grapex> which maybe we now need to be aware of in the Reddwarf API
21:31:48 <grapex> Ok, this is a huge philosophical can of worms. :)
21:31:49 <hub_cap> or is this just for the people that stand up and manage the service
21:31:55 <esmute> grapex: no. because originally when you had reddwarf_volume_support ==false, the data was stored in the root partition
21:32:01 <hub_cap> to me, the customer cares about "local" vs "remote"
21:32:20 <hub_cap> technically the disk _in_ the image is ephemeral right?
21:32:26 <datsun180b> I'm concerned that they'll see "storage: 0" in a 4G flavor and pick up the big red phone
21:32:32 <esmute> hub_cap: so you are saying "local" == ephemeral?
21:32:37 <hub_cap> ya
21:32:41 <cp16net> technically yes
21:32:46 <hub_cap> when u delete an instance esmute, does it not go away?
21:32:52 <hub_cap> but the strategy for how you partition it is up to the implementors
21:33:01 <hub_cap> youre saying u want 2 partitions instead of 1
21:33:02 <cp16net> well if it reboots it does not disappear
21:33:03 <hub_cap> thats all
21:33:14 <hub_cap> sure, neither does the vm cp16net
21:33:36 <esmute> do we foresee a case where we would want to store the data in the root partition?
21:33:47 <esmute> ie /dev/vda
21:34:02 <esmute> hub_cap: yes to your question
21:34:20 <hub_cap> right what im getting at is
21:34:23 <hub_cap> does the customer care
21:34:36 <hub_cap> if they see flavor disk=100 or ephemeral=100 does it make a diff to them
21:34:53 <hub_cap> or do they see disk=100 and the people running it in the background say, lets do a 100g vol called vdb
21:35:20 <datsun180b> If I'm reading these changesets right though, if we're not using ephemeral then we're going to see "storage: 0" in our flavors
21:35:33 <SlickNik> hub_cap: but these are different in nova. And nova flavors expose them separately.
21:35:34 <datsun180b> confirm / deny
21:35:36 <grapex> Any chance we can take this discussion offline?
21:36:02 <SlickNik> datsun180b: Yeah, I agree. That's probably a good case for renaming storage to ephemeral.
21:36:03 <cp16net> i second that
21:36:06 <esmute> datsun180b: i am about to change that.. if we are not using ephemeral, we wont display it
21:36:06 <hub_cap> grapex: +1
21:36:21 <SlickNik> esmute: that is a good solution too.
21:36:22 <datsun180b> good to hear on both counts
21:36:26 <SlickNik> Yup ++ to grapex
21:36:33 <datsun180b> if that's the case I'll scrap these comments
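(The display fix esmute describes above might look roughly like this in the flavor view — a sketch only, with attribute and key names assumed rather than taken from the changeset:)

    class FlavorView(object):
        def __init__(self, flavor):
            self.flavor = flavor

        def data(self):
            view = {
                "id": self.flavor.id,
                "name": self.flavor.name,
                "ram": self.flavor.ram,
            }
            # Omit the field entirely when the flavor has no ephemeral
            # storage, so users never see a confusing "storage: 0".
            if self.flavor.ephemeral:
                view["ephemeral"] = self.flavor.ephemeral
            return view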
21:36:42 <esmute> one question i have
21:36:47 <datsun180b> ga
21:36:51 <SlickNik> Let's take this offline.
21:36:59 <SlickNik> Looks like we have some more to discuss here.
21:37:05 <datsun180b> back to #reddwarf then?
21:37:22 <SlickNik> Sure.
21:37:31 <esmute> for you guys, with local storage, it doesnt matter if its stored in root or in ephemeral partition right?
21:37:34 <SlickNik> #topic Open Discussion.
21:37:49 <esmute> as long as it is "local"
21:38:10 <hub_cap> i like cheese
21:38:10 <SlickNik> Any other items for discussion?
21:38:24 <esmute> hub_cap: pepperjack is my fav
21:38:41 <grapex> Tillamook Smoked Extra Sharp Cheddar
21:38:46 <datsun180b> cheddar or die
21:38:48 <grapex> Let's start the academy award music
21:39:07 <SlickNik> Wensleydale for the win...
21:39:17 <SlickNik> okay, sounds good.
21:39:17 <cp16net> i think we are done.
21:39:23 <grapex> Alright, this was a good one.
21:39:24 <SlickNik> #end meeting
21:39:24 <hub_cap> salty blue cheese FTW
21:39:27 <datsun180b> thanks everyone
21:39:29 <SlickNik> #endmeeting