Hutchinson
Monday, October 3
 

11:00am EDT

MySQL Alternatives
MySQL presents a unique challenge in the deployment of OpenStack. This brainstorming session will look at all of the current technologies and approaches that could be used to achieve a more horizontally scalable solution. Examples include: multi-zone clustering, MySQL HA (Drizzle HA), and SQLAlchemy integration with NoSQL databases.

Monday October 3, 2011 11:00am - 11:55am EDT
Hutchinson

12:00pm EDT

Make VM state handling more robust
We've found a number of cases where the current handling of VM state makes some failure modes difficult to debug and confusing to customers. For example, a VM that takes a long time to build can be terminated, go to shutdown, and then jump back into life when the build completes. There are also a number of cases where the API allows calls that are inconsistent with the current state, which leads to non-deterministic behavior. We propose to better define and control the allowed state transitions, and to introduce some specific failure states that make certain failure modes more explicit.
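A minimal sketch of what an explicit transition table could look like; the state names and allowed transitions below are illustrative assumptions for discussion, not Nova's actual state machine:

```python
# Hypothetical allowed VM state transitions; anything not listed is rejected.
ALLOWED_TRANSITIONS = {
    "building": {"active", "error", "deleted"},
    "active": {"shutoff", "rebuilding", "error", "deleted"},
    "shutoff": {"active", "deleted"},
    "rebuilding": {"active", "error"},
    "error": {"deleted"},
    "deleted": set(),
}


def transition(instance, new_state):
    """Apply a state change only if it is explicitly allowed."""
    current = instance["vm_state"]
    if new_state not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError("Invalid state transition: %s -> %s"
                         % (current, new_state))
    instance["vm_state"] = new_state
    return instance
```

With a table like this, the "terminated instance comes back to life" case becomes an explicitly rejected transition rather than a race.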

Monday October 3, 2011 12:00pm - 12:25pm EDT
Hutchinson

2:00pm EDT

Stable Release Updates
This session is mainly targeted at distributions, integrators and those who deploy. OpenStack moves at an incredible pace, and the target audience for this session usually takes a released snapshot and aims to support it for a given time. This often means that they need to cherry-pick fixes back from trunk, or create their own custom patches. It would seem logical to pool this effort together, where the target audience for the session can talk about viability, procedure, policy and collaboration.

Monday October 3, 2011 2:00pm - 2:55pm EDT
Hutchinson

3:00pm EDT

Separating APIs from Implementation of the API
This discussion session examines why the OpenStack community and governance boards should consider the API separately from the implementation of the API. The API and the implementation should be separated entirely, including having the PPB (or some other focused group) vote *separately* on the API of a layer and on the implementation of that layer. This would make it much easier for competing implementations to be written and "blessed" as OpenStack-compliant. The Object Storage API could then have non-Python implementations, such as what has been done by the folks at GlusterFS and is being worked on for two other storage systems/vendors. Likewise, if Rackspace IT or some other organization (just using Rackspace IT as an example since I recently spoke with them... ;) ) has a Java implementation of the Images API, they could go ahead and use it without worrying that their implementation would break other parts of OpenStack. In addition, these folks could propose their implementation for inclusion in "core OpenStack". If you separate the API from the implementation, the idea of "core OpenStack" can change from "a set of mostly Python projects (because that's what things were implemented in)" to "a set of projects that implement the core OpenStack APIs". Another way to think of it: we don't say that RabbitMQ is "core AMQP"; we say that RabbitMQ "implements the AMQP standard". OpenStack should be similar: a standard set of APIs for cloud components, with a set of implementations of those APIs.

Monday October 3, 2011 3:00pm - 3:55pm EDT
Hutchinson

4:30pm EDT

OpenStack Compute API 2.0
What should the 2.0 version of the API look like? If we make the decision to remove backwards compatibility with API 1.1, what does that open up to us? Here are some concepts that have caused concern in 1.1; we can discuss those. More importantly, we can get a vision of what new features the 2.0 API should have. 1. Fully asynchronous model - are there parts of 1.1 we can clean up? 2. Progress indicators - user experience designers love 'em; can they be made more realistic or should they be eliminated? 3. How can we best retrieve (near) real-time information on errors and usage from a server? Right now, all that data is pushed out over a message queue. As a user, I want to see all the problems with a server at once, and not have to build a solution to store and manage those. 4. Fault analysis and healing: should the OS Compute API be able to recognize problems and suggest solutions (high disk I/O, network bandwidth, etc.)? What would the API for that look like? 5. Can we build elasticity into our cloud? How would we define what that looks like to the user?

Monday October 3, 2011 4:30pm - 5:25pm EDT
Hutchinson

5:30pm EDT

Serialization and the WSGI Layer
I'd like to discuss plans to refactor the (mostly common) code shared between Nova, Keystone and Glance that handles deserialization of request data, serialization of response data, and the general architecture of the WSGI pipeline. Brian Waldon did some excellent work in Glance to refactor the tight coupling of the Controller with HTTP request and response handling. I'd like to explore ways that this work can be continued and eventually make its way into the openstack-skeleton project. The initial refactoring of the WSGI pipeline in Glance added a wsgi.Resource class that contains a serializer, deserializer and controller. However, the Controller objects (in both Nova and Glance) continue to raise HTTP errors directly through webob.exc.HTTPX objects. One initial refactoring I'd like to see is the complete decoupling of the controller from the protocol that the server is responding on. Eventually, we want Glance to be able to respond on non-HTTP protocols (such as AMQP), and having an entirely decoupled controller allows greater code reuse.
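A minimal sketch of the kind of decoupling being described, with made-up class names and a toy in-memory controller (not the actual Glance or Nova code): the controller raises protocol-agnostic exceptions, and only the outer resource layer knows about webob.

```python
import json

import webob.exc


class ImageNotFound(Exception):
    """Protocol-agnostic error raised by the controller."""


class ImageController(object):
    """Works only with plain dicts; knows nothing about HTTP or webob."""

    def __init__(self, images):
        self.images = images

    def show(self, image_id):
        try:
            return {"image": self.images[image_id]}
        except KeyError:
            raise ImageNotFound(image_id)


class HTTPResource(object):
    """Binds a serializer to a controller for the HTTP front end; the
    same controller could sit behind an AMQP front end unchanged."""

    def __init__(self, controller, serializer=json.dumps):
        self.controller = controller
        self.serializer = serializer

    def show(self, image_id):
        try:
            result = self.controller.show(image_id)
        except ImageNotFound:
            # HTTP knowledge lives only at this outermost layer.
            raise webob.exc.HTTPNotFound()
        return self.serializer(result)
```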

Monday October 3, 2011 5:30pm - 5:55pm EDT
Hutchinson
 
Tuesday, October 4
 

9:00am EDT

Host aggregates in OSAPI admin extension
In this session we would like to discuss the introduction of the concept of an aggregate. An aggregate is a collection of compute hosts that have access to shared capabilities (e.g. storage, network, etc.) and allows things like zero-downtime host upgrades (for example, via live migration of instances from one member of the aggregate to another), thus causing no disruption to guest instances. The concept of a hierarchy of aggregates could also be explored for scalability reasons. We propose to expose such a concept to the admin of the cloud, and at the same time to keep it transparent to the end-user. In fact, the OSAPI already provides some capabilities for host management via admin extensions, but this is very crude at the moment. Further extending the concept means that putting a host into 'maintenance mode' implies that all the instances running on it can migrate onto an available host in the aggregate, thus causing no disruption to guest instances. Bringing such capability into the API (and consequently into the orchestration infrastructure) also means that the admin can intervene on the platform while remaining agnostic of the kind of hypervisor the host is running, which is very beneficial.
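A rough sketch of what the 'maintenance mode' flow might look like; the function names, fields and round-robin placement here are illustrative assumptions, not the proposed admin extension itself:

```python
# Hypothetical drain of a host within an aggregate before maintenance.
def enter_maintenance(aggregate, host, compute_api):
    """Live-migrate a host's instances onto other members of the same
    aggregate, independently of the underlying hypervisor."""
    targets = [h for h in aggregate["hosts"] if h != host]
    if not targets:
        raise RuntimeError("No other hosts in the aggregate to migrate to")
    for i, instance in enumerate(compute_api.instances_on_host(host)):
        destination = targets[i % len(targets)]  # naive round-robin placement
        compute_api.live_migrate(instance, destination)
    aggregate["maintenance_hosts"].add(host)
```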

Tuesday October 4, 2011 9:00am - 9:25am EDT
Hutchinson

9:30am EDT

VM disk management in Nova
From discussions on the mailing list, there are some differences in how disk management is currently implemented in Nova. This session would discuss the differences between the virt layers and attempt to come to a consensus on a single way to manage disks. Current discussions have been around fixed disks with optional expandable secondary disks, or a single disk that is expanded. I would like to discuss these options as well as others.

Tuesday October 4, 2011 9:30am - 9:55am EDT
Hutchinson

10:00am EDT

Integration Test Suites and Gating Trunk
There has been a proliferation of custom integration test suites and frameworks over the last year. Some of these include: * Stacktester (https://github.com/rackspace-titan/stacktester/) * Kong (https://github.com/cloudbuilders/kong) * Backfire (https://github.com/ohthree/backfire) * Proboscis (https://github.com/rackspace/python-proboscis) An effort to consolidate some of these into a unified integration test suite for OpenStack has been started and is available at https://github.com/sorenh/openstack-integration-tests. Some discussion about the effort occurred on the mailing list: https://lists.launchpad.net/openstack/msg04014.html. This session is to discuss the new, singular place to house integration tests of OpenStack, bring everyone's ideas to the table, assign some action items to interested parties, and make sure all interested parties have a clear roadmap for the integration test suite. Let's talk about a possible plan to gate commits so that things don't break quite so often. Things to consider: - Cross-service? (Keystone, Glance, Nova) - Packages or source installs? - Config management? (Chef, Puppet) - How to maintain? If we use packages, then which packages control commits into trunk (Ubuntu, Debian, both)? - Which configurations are we gating against (libvirt, XenServer, MySQL, Postgres, etc.)?
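A minimal sketch of the kind of black-box smoke test such a unified suite might gate trunk on; the endpoint URL, token and resource names are illustrative assumptions:

```python
import json
import unittest
import urllib.request


class ServersSmokeTest(unittest.TestCase):
    """Black-box check against a running Nova API endpoint."""

    nova_url = "http://127.0.0.1:8774/v1.1/demo"  # assumed endpoint
    auth_token = "admin-token"                    # assumed credentials

    def test_list_servers(self):
        req = urllib.request.Request(
            self.nova_url + "/servers",
            headers={"X-Auth-Token": self.auth_token})
        with urllib.request.urlopen(req) as resp:
            self.assertEqual(200, resp.status)
            self.assertIn("servers", json.loads(resp.read()))


if __name__ == "__main__":
    unittest.main()
```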

Tuesday October 4, 2011 10:00am - 10:25am EDT
Hutchinson

11:00am EDT

Dashboard, Nova, Keystone and RBAC
Discuss role-based access controls and our strategy for defining and implementing them with Keystone in a standardized way.

Tuesday October 4, 2011 11:00am - 11:55am EDT
Hutchinson

12:00pm EDT

OpenStack Common
OpenStack currently comprises three official projects, one incubator project, and a number of satellite projects: Compute/Nova (Official), Image/Glance (Official), Object/Swift (Official), Identity/Keystone (Official as of Essex), Network/ (Satellite), Queue/Burrow (Satellite), Volume/Lunar (Satellite), Database/Red Dwarf (Satellite), and more! Many of these projects share a large amount of code which has been copied, pasted, and modified to fit new scenarios. I would like to propose: 1) the creation of an `openstack.common` Python module to be hosted on GitHub; 2) official guidelines for including code in `openstack.common`, including a long list of DOs and DON'Ts; 3) a timeline and steps for moving forward, as well as a discussion of potential pitfalls.
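As one example of the kind of helper that tends to get copied between Nova, Glance and Keystone and could move into a shared module, here is a generic re-implementation of a dotted-path class loader (a sketch, not the projects' exact code; the `package.module:ClassName` string format is an assumption):

```python
def import_class(import_str):
    """Return a class from a dotted 'package.module:ClassName' string."""
    module_str, _, class_str = import_str.rpartition(":")
    module = __import__(module_str, fromlist=[class_str])
    return getattr(module, class_str)


# Example: load the stdlib JSON decoder by name.
decoder_cls = import_class("json:JSONDecoder")
```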

Tuesday October 4, 2011 12:00pm - 12:25pm EDT
Hutchinson

2:00pm EDT

Advanced Scheduling
Since the Diablo Summit, there have been a number of advances with Zones and the Distributed Scheduler (particularly with the Least Cost scheduler). Additionally, there are still a lot of groups that have specific needs of the scheduler, such as: 1. the use of different metrics for making scheduling decisions, such as InstanceType, compute host ports/interfaces, etc.; 2. scheduling for networks and volumes; 3. different cost/weighting calculations. This session is intended to talk about the requirements of these other groups and try to unify the solutions. We only have 55 minutes for this session, so let's try to get all the issues outlined on this wiki page first: http://wiki.openstack.org/EssexSchedulerSession
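For context, a minimal sketch of how least-cost scheduling combines weighted cost functions; the particular metrics, weights and host fields below are made-up examples, not Nova's actual cost functions:

```python
# Lower combined cost wins; each cost function maps a host to a number.
def ram_cost(host):
    return -host["free_ram_mb"]   # prefer hosts with more free RAM


def port_cost(host):
    return -host["free_ports"]    # prefer hosts with more free ports


COST_FUNCTIONS = [(1.0, ram_cost), (0.5, port_cost)]  # (weight, function)


def weighted_cost(host):
    return sum(weight * fn(host) for weight, fn in COST_FUNCTIONS)


def pick_host(hosts):
    """Return the candidate host with the lowest combined cost."""
    return min(hosts, key=weighted_cost)
```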

Tuesday October 4, 2011 2:00pm - 2:55pm EDT
Hutchinson

3:00pm EDT

Private/public cloud integration (bursting)
In a perfect world, private clouds would be able to automatically expand to take advantage of public clouds (aka bursting). This raises issues of authentication, authorization, and federated management of those roles and responsibilities. Can we come up with a model for how to handle this in Essex? I know that a lot of people have been thinking about this, but there does not appear to be a solid consensus at the moment.

Tuesday October 4, 2011 3:00pm - 3:25pm EDT
Hutchinson

3:30pm EDT

Waiting for Keystone - Hacking AuthZ short-term
While the ultimate goal is using Keystone for AuthZ in Nova, we may need an interim solution until Keystone is fully ready. This session will discuss, at a code level, what might be possible to do in Nova until Keystone is ready. How will authorization checks get performed? Decorators? Explicit if-statements? When will authorization checks get performed? In the API? In Server.API? In the service itself? How will we configure this interim solution without writing a whole lot of code that will be thrown away later? .conf? db? json? .py? Will we require all Nova resources to belong to Resource Groups in order to keep the AuthZ-service calls to a minimum? Will this work across Zones? Will this work in federated environments? How will this interim solution get replaced when Keystone is ready?
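To make the decorator option concrete, a minimal sketch of what an interim check might look like; the policy table, action names and context fields are hypothetical:

```python
import functools

# Hypothetical interim policy table; in practice this could come from a
# .conf file, the database, a JSON document, or Python code.
POLICY = {
    "compute:reboot": {"admin", "member"},
    "compute:delete": {"admin"},
}


class NotAuthorized(Exception):
    pass


def require(action):
    """Decorator that checks the caller's roles against the policy table."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, context, *args, **kwargs):
            if not POLICY.get(action, set()) & set(context["roles"]):
                raise NotAuthorized(action)
            return func(self, context, *args, **kwargs)
        return wrapper
    return decorator


class ComputeAPI(object):
    @require("compute:reboot")
    def reboot(self, context, instance_id):
        # ... would call down into the compute service here ...
        return instance_id
```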

Tuesday October 4, 2011 3:30pm - 3:55pm EDT
Hutchinson

4:30pm EDT

Integration and future of Novaclient
Novaclient currently lives on GitHub, but it is a core part of Nova as it is required for Zones. Do we think it's time to bring it in from the cold? Issues: 1. a client that tracks the bleeding edge of dev; 2. the importance of an externally maintained client tool; 3. linkage ... how to integrate. That said, keeping novaclient in step with Nova under the current design is a very laborious process. There are other design possibilities that we'd like to talk about as well. So, our secondary agenda is to explore a new implementation of Novaclient: - rebrand as OpenStack Compute Client and lose Nova-specific functionality? - should we base it on an IDL or continue with the existing approach? - what limitations will there be if we choose to convert to an IDL-based implementation?

Tuesday October 4, 2011 4:30pm - 5:25pm EDT
Hutchinson

5:30pm EDT

Nova Image cache management
Nova loads images on demand from Glance; these may then be shared by multiple VMs via copy-on-write layers. The resulting image cache on each compute server is currently unmanaged, and performing any audit/clean-up operations on it across a large number of compute servers is a non-trivial operational procedure. This session would explore the concept of a cache manager that identifies the list of cached images and their current state with respect to their usage by VMs and whether they still exist in Glance. Based on this state, the cache manager would be able to take a number of configurable actions, such as: deleting images in the cache that are no longer used by VMs (or have not been used for some period); deleting images in the cache that are no longer used by VMs and are no longer present in Glance; preloading specific images, for example by matching on specific Glance metadata; and, by referring to metadata associated with the images, tracking the usage of licensed operating systems. Preloading will need to implement a mechanism to ensure that compute servers do not all request the same image from Glance concurrently. Once image state data is available, it could be used by a scheduler enhancement to place VMs on servers that already have a cached copy of the required image.
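A simplified sketch of one configurable clean-up pass such a cache manager might run; the cache path, age threshold and the way in-use and Glance image lists are obtained are all assumptions:

```python
import os
import time

CACHE_DIR = "/var/lib/nova/instances/_base"  # assumed cache location
MAX_UNUSED_AGE = 24 * 3600                   # seconds before "stale"


def clean_image_cache(images_in_use, images_in_glance):
    """Remove cached base images that no running VM uses and that are
    either stale or no longer present in Glance."""
    now = time.time()
    for name in os.listdir(CACHE_DIR):
        if name in images_in_use:
            continue
        path = os.path.join(CACHE_DIR, name)
        unused_for = now - os.path.getmtime(path)
        if name not in images_in_glance or unused_for > MAX_UNUSED_AGE:
            os.unlink(path)
```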

Tuesday October 4, 2011 5:30pm - 5:55pm EDT
Hutchinson
 
Wednesday, October 5
 

9:00am EDT

Cobbler Integration
Integrate Cobbler into Nova in order to boot bare-metal machines.

Wednesday October 5, 2011 9:00am - 9:25am EDT
Hutchinson

9:30am EDT

XenServer KVM feature parity plan
Right now there are quite a few feature gaps between KVM and XenServer. The goal of this session is to document all known gaps, decide on an action plan for getting them resolved, create bugs/blueprints for all of them, and (hopefully) assign them.

Wednesday October 5, 2011 9:30am - 10:25am EDT
Hutchinson

11:00am EDT

Nova: Support for PCI Passthrough & SR-IOV
Discussion about adding support for PCI-passthrough & SR-IOV enabled devices - Inventory management (manual vs auto-discovery of available resources) - Capability reporting and Scheduling - Implementation on different platforms: Libvirt, Xen, HyperV, VMware - Live & Offline Migration & possible other aspects Related BP: https://blueprints.launchpad.net/nova/+spec/pci-passthrough Merge proposal (first cut): https://review.openstack.org/#change,776

Wednesday October 5, 2011 11:00am - 11:25am EDT
Hutchinson

11:30am EDT

Nova Upgrades
Discussion of possible changes to Nova to support transparent updates. There are a number of ways to go about upgrading Nova with service impacts that range from no impact to a complete outage. Making upgrades as transparent as possible will involve carefully controlling the dependencies between components and the order in which various components are upgraded. This session will be a discussion of changes to Nova that will minimize the impact of future upgrades on users.

Wednesday October 5, 2011 11:30am - 11:55am EDT
Hutchinson

12:00pm EDT

EC2 Feature Review
Review EC2 features that are currently missing from Nova.

Wednesday October 5, 2011 12:00pm - 12:25pm EDT
Hutchinson

2:00pm EDT

The future of nova-volume
This is a catch-all session to discuss where nova-volume needs to go to get it up to snuff. Some important pieces: * integration of cleanup code for separation from compute * special schedulers based on type * zone support * separate api endpoint?

Wednesday October 5, 2011 2:00pm - 2:55pm EDT
Hutchinson

3:00pm EDT

Notification and error handling (orchestration)
Nova currently ships error messages and notifications outside the system via a message queue. This has the downside that those errors are not available for diagnostics, and storing those errors breaks the simple model of Nova's architecture. The 1.1 API specification uses asynchronous notifications to access this data, but it is at best a mediocre solution. This session would be to brainstorm on various proposals for the 2.0 API and understand the best way to ensure that Nova users can track errors.
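As a starting point for the brainstorm, a sketch of one possible direction: consume error notifications off the queue and keep them addressable by instance, so an API handler could return them on demand. The payload fields and the idea of a faults listing per server are assumptions, not part of any current specification:

```python
import collections

# In-memory store of recent faults, keyed by instance; a real system
# would persist these somewhere durable.
RECENT_FAULTS = collections.defaultdict(list)


def on_notification(message):
    """Called for each notification consumed from the message queue."""
    if message.get("priority") == "ERROR":
        payload = message.get("payload", {})
        RECENT_FAULTS[payload.get("instance_id")].append({
            "event": message.get("event_type"),
            "message": payload.get("message"),
            "timestamp": message.get("timestamp"),
        })


def faults_for_instance(instance_id):
    """What a 2.0 API handler might return for a per-server faults listing."""
    return RECENT_FAULTS.get(instance_id, [])
```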

Wednesday October 5, 2011 3:00pm - 3:55pm EDT
Hutchinson

4:30pm EDT

IPv6 call to arms
We (Cybera) have tried to deploy OpenStack on IPv6 with mixed results; see http://www.cybera.ca/tech-radar/using-openstack-with-ipv6. IPv6 support in OpenStack could use improvements in both the code and the documentation. At a minimum, the purpose of this session would be to discover the landscape of IPv6 usage in the OpenStack community. Who is running OpenStack on IPv6? How are they using it? What challenges have they faced? Ideally we would also brainstorm the way forward for IPv6 support: identify the areas that need the most work, and discuss potential collaborations with people willing to contribute time to improving IPv6 code and documentation. We've started an etherpad at http://etherpad.openstack.org/nBJazPhggD and already filled in some info to get the ball rolling when the session starts.

Wednesday October 5, 2011 4:30pm - 4:55pm EDT
Hutchinson

5:00pm EDT

Increase Security (incl. new secure root wrapper)
In this session we'll discuss the potential threats to Nova's internal security and their mitigation, including: * Information disclosure: log files and configuration files contain clear-text passwords. * Root escalation: in Diablo, the direct use of sudo was abstracted to allow plugging in another root escalation wrapper. That leaves room for implementing a more secure root wrapper that would filter arguments more precisely.
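A toy sketch of the kind of argument-filtering wrapper being described; the whitelisted commands and regular expressions are illustrative, not a proposed filter set:

```python
import re
import subprocess
import sys

# Each entry whitelists an executable and constrains the shape of its
# arguments, instead of granting blanket sudo.
FILTERS = [
    ("/sbin/ip", re.compile(r"^[a-z0-9./:-]+$")),
    ("/usr/bin/qemu-img", re.compile(r"^[A-Za-z0-9._/=-]+$")),
]


def allowed(cmd):
    for executable, arg_re in FILTERS:
        if cmd and cmd[0] == executable and all(arg_re.match(a) for a in cmd[1:]):
            return True
    return False


if __name__ == "__main__":
    command = sys.argv[1:]
    if not allowed(command):
        sys.exit("Unauthorized command: %r" % command)
    sys.exit(subprocess.call(command))
```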

Wednesday October 5, 2011 5:00pm - 5:25pm EDT
Hutchinson

5:30pm EDT

Discover Diablo Networking Modes
Quantum and Melange will be here soon, but until then, we have to get by with the existing networking modes. We'll discuss the plethora of networking options, how they can be hacked and tweaked, and how they are implemented. We might even discuss some other potential features that could be added.

Wednesday October 5, 2011 5:30pm - 5:55pm EDT
Hutchinson
 