15:00:45 <zul> #startmeeting
15:00:46 <openstack> Meeting started Tue Dec 13 15:00:45 2011 UTC.  The chair is zul. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:47 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic.
15:00:56 <zul> hi welcome to the nova ec2 api team meeting
15:01:02 <zul> the agenda is the following:
15:01:09 <zul> 1. bugs bugs and more bugs
15:01:11 <zul> 2. blueprints
15:01:16 <zul> 3. Openfloor
15:01:26 <zul> #topic bugs bugs and more bugs
15:01:40 <zul> so the current list of bugs that are tagged with ec2 is at the following:
15:01:50 <zul> https://bugs.launchpad.net/nova/+bugs?field.tag=ec2
15:02:19 <zul> one that i want to highlight is something that smoser has reported which is the following:
15:02:31 <zul> https://bugs.launchpad.net/nova/+bug/903405
15:02:33 <uvirtbot> Launchpad bug 903405 in nova "ec2 metadata service extremely unreliable" [Undecided,New]
15:02:48 <smoser> i should comment in that bug. i can't reproduce that. :-(
15:02:54 <zul> oh really?
15:03:02 <smoser> yeah. but i clearly did see it.
15:03:17 <smoser> a re-install of devstack cleared away the issue.
15:03:20 <zul> you just pulled in a new version or something and it can't be reproduced?
15:03:23 <zul> ok
15:03:33 <zul> well its something to keep an eye on
15:03:40 <smoser> i don't think any source change actually fixed the issue.
15:03:51 <zul> ok
15:04:06 <smoser> as i used the same source, just deleted all database state and started fresh.
15:04:27 <zul> so it could have been a database thing not sure
15:04:52 <zul> ok then something to test in the future
15:05:14 <zul> smoser: in the testing you did yesterday how was the performance of the metadata server?
15:05:51 <smoser> i believe bad.
15:05:53 <zul> did you perceive it as still slow?
15:05:55 <smoser> but i only ran 2 instances.
15:05:57 <smoser> and it wasn't good
15:06:05 <zul> ok
15:06:07 <smoser> but i dont think that patch has landed in trunk, right?
15:06:13 <zul> i doubt it
15:06:16 <smoser> right.
15:06:24 <smoser> so there would be no reason to expect it to not be as bad as ever.
15:06:26 * bcwaldon shows up late
15:06:31 <zul> hi bcwaldon
15:06:38 <bcwaldon> carry on :)
15:07:02 <smoser> i'm not familiar enough with how nova works in this regard (bcwaldon probably is).
15:07:03 <zul> right, so i think we might have to go through gerrit, look for ec2 specific stuff, and help the review process along
15:07:26 <smoser> but i had a thought that it might make sense to just cache the last N results of the metadata query in memory
15:07:30 <zul> since the review process can be slow sometimes
15:07:43 <smoser> and then draw from that in the metadata server code to avoid the DB hit.
15:07:51 <zul> smoser: i think that would be a good idea
15:08:23 <zul> any other comments
15:08:41 <smoser> i just don't know where maintaining that cache would actually occur, i.e. what code is there that i could use to keep such a cache alive.
15:09:13 <zul> smoser: do you want to have a look at it and maybe come up with a proposal?
15:09:31 <smoser> then, each request would still make a round trip to the ec2-api server, but it'd just hit cache.
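A rough illustration of the idea smoser sketches above, not anything that exists in nova at this point: a tiny in-process cache sitting in front of the metadata lookup, keyed by the requesting instance's address. The names get_metadata and db_lookup and the TTL value are hypothetical.

    # hypothetical sketch: cache metadata lookups in-process so repeat
    # requests skip the database round trip (not nova code)
    import time

    _CACHE = {}        # fixed ip -> (timestamp, metadata dict)
    _CACHE_TTL = 15    # seconds before an entry is considered stale

    def get_metadata(address, db_lookup):
        """Return metadata for `address`, hitting the DB only on a cache miss."""
        now = time.time()
        entry = _CACHE.get(address)
        if entry and now - entry[0] < _CACHE_TTL:
            return entry[1]            # served from cache, no DB hit
        data = db_lookup(address)      # the expensive database query
        _CACHE[address] = (now, data)
        return data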
15:09:32 <bcwaldon> we only recently added our first caching piece to nova (in the network manager, I think), so this is all new territory
15:09:56 <bcwaldon> smoser: I'm curious if there's something else slowing you down, I wouldn't imagine a db lookup w/ 2 instances in it could be slow
15:10:20 <smoser> slow ~ .5 seconds or something for a lookup.
15:10:25 <bcwaldon> smoser: that's an eternity
15:10:29 <smoser> right.
15:10:40 <smoser> it goes to multiple seconds when you get lots of instances
15:10:45 <smoser> let me check really quick if i have it up
15:11:05 <smoser> yeah... here
15:11:07 <bcwaldon> you should ping vishy and see what kind of performance he sees with it
15:11:07 <smoser> $ time ec2metadata --local-ipv4
15:11:08 <smoser> 10.0.0.2
15:11:08 <smoser> real    0m 0.35s
15:11:08 <smoser> user    0m 0.00s
15:11:08 <smoser> sys     0m 0.00s
15:11:23 <smoser> that's from an instance (one of only 2 ever run in this openstack installation).
15:11:33 <smoser> so .35 seconds for one http request.
15:11:48 <bcwaldon> yeah, that's way too long
15:12:03 <bcwaldon> I'm willing to bet there's something else at play here...
15:12:08 <smoser> that is on an all-in-one system... but still.
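For reference, the measurement above can be reproduced programmatically from inside a guest; the sketch below just times one GET against the standard metadata address that ec2metadata queries under the hood. It assumes a Python 2 environment (matching nova of this era) and nothing nova-specific.

    # assumed to run inside an instance; times a single metadata request
    import time
    import urllib2  # python 2

    start = time.time()
    value = urllib2.urlopen(
        'http://169.254.169.254/latest/meta-data/local-ipv4').read()
    print value, 'in %.2fs' % (time.time() - start)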
15:12:19 <bcwaldon> are you using trunk?
15:12:33 <smoser> yes. devstack install yesterday.
15:12:37 <bcwaldon> ok
15:12:48 <smoser> this is a known issue.
15:12:59 <smoser> bug 851159
15:13:00 <uvirtbot> Launchpad bug 851159 in nova "ec2 metadata service is very slow" [High,In progress] https://launchpad.net/bugs/851159
15:13:07 <bcwaldon> ok
15:14:29 <zul> ok, so not to beat a dead horse, but in the bug list there are some patches attached to the bug reports, so it would be a good idea to move some of those patches to gerrit so they can get in
15:14:40 <zul> so im going to start doing that this week.
15:14:53 <zul> good idea bad idea?
15:15:16 <zul> im assuming it is a good idea
15:15:25 <zul> so moving on
15:15:32 <zul> #topic blueprints
15:16:07 <zul> so we have a blueprint assigned to us already: https://blueprints.launchpad.net/nova/+spec/aws-api-validation
15:16:56 <zul> the work has already been done in gerrit but the review is getting stale, so i think we might have to contact the author of the patch and drive it through to completion
15:17:11 <zul> sounds good?
15:17:13 <smoser> sounds good.
15:18:36 <zul> i want to go through the list of blueprints to see if there are any other ec2 specific blueprints and see where they are and hope to offer some help to the authors as well.
15:18:58 <zul> with that i want to open the floor for some discussion
15:19:02 <zul> #topic openfloor
15:19:14 <zul> anyone have anything to bring up?
15:19:23 <zul> ttx: i bet you are lurking around
15:20:15 <zul> smoser: do you have anything
15:20:20 <zul> Daviey: do you have anything?
15:20:21 <ttx> busy on a security issue, sorry
15:20:26 <zul> no worries
15:20:42 <zul> Daviey: seems to be afk
15:20:42 <smoser> i dont have anything.
15:20:51 <smoser> other than that i'll say i'm very happy with devstack at the moment
15:20:57 <zul> ok anyone else?
15:21:01 <smoser> and the ease in being able to get to a point where you can test things.
15:21:07 <smoser> actually.. i guess i do have something
15:21:12 <zul> smoser: cool beans you will have to show me it
15:21:29 <smoser> one piece of work that is bigger than a bread box that is essential to ec2 api is publish-image
15:21:44 <smoser> bug 903345
15:21:45 <uvirtbot> Launchpad bug 903345 in nova "no way to publish-image if nova uses keystone (no EC2_CERT)" [Undecided,New] https://launchpad.net/bugs/903345
15:21:48 <smoser> i opened yesterday.
15:22:25 <zul> yeah i saw this yesterday
15:22:49 <zul> it's something that definitely needs to be fixed, otherwise the cloud-publish scripts would be quite useless
15:22:50 <smoser> that is not just a simple bug fix like many of the other ec2 issues.
15:23:35 <smoser> i'm happy to add "native glance" support to the cloud-publish tools
15:23:57 <smoser> but the immediate issue with that is that currently they result in an ami being output.
15:23:58 <zul> i think alot of people would like that
15:24:25 <smoser> but if you use glance upload, then there is no way to link what you get back (a glance uuid) to an ec2 api image.
15:24:32 <smoser> er... an ami-id
15:24:56 <smoser> so i can't just swap out the "euca-publish-image" and "euca-bundle-vol" with glance-upload.
15:25:07 <zul> right
15:25:32 <zul> it's definitely something we need to look at
15:25:44 <smoser> i guess one way to cheat might be to allow nova ec2-api input to take a uuid...
15:26:01 <smoser> DescribeImages uuid would basically then do the conversion for you.
15:26:47 <zul> but then you need to convert it to something euca2ools understands, don't you?
15:27:03 * zul typing is horrendous today
15:27:21 <smoser> well, if the DescribeImages returned the same results that it would if you gave it the ami-id, then we'd be fine
15:27:24 <smoser> glance-publish
15:27:31 <smoser> ec2-describe-images GLANCE_UUID
15:27:34 <smoser> parse result
15:27:37 <smoser> show ami-id
15:27:58 <zul> k
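The workflow smoser lists above (glance-publish, ec2-describe-images GLANCE_UUID, parse result, show ami-id) could look roughly like the boto call below. This is a sketch only: it assumes the proposed behaviour where nova's DescribeImages accepts a glance uuid and returns the matching ec2 image record, which does not exist today, and the function name is hypothetical.

    # sketch: resolve a glance uuid to an ami-id via the nova ec2 api,
    # assuming DescribeImages were extended to accept the uuid as proposed
    from boto.ec2.connection import EC2Connection

    def ami_id_for_glance_uuid(conn, glance_uuid):
        # conn is an EC2Connection pointed at the nova ec2 api endpoint
        images = conn.get_all_images(image_ids=[glance_uuid])
        return images[0].id if images else None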
15:28:24 <zul> anyone else have anything?
15:28:44 <zul> if not.....
15:28:47 <zul> thanks for coming
15:28:49 <zul> #endmeeting