
Monday, October 3
 

9:00am

Welcome and Introduction to the Summit

Introduction on how the Summit works.


Monday October 3, 2011 9:00am - 9:25am
3rd floor

9:30am

Quantum API v1.1
The goal of this session is to discuss and agree on the improvements that should be made to the Quantum API for the Essex release. Improvements to the Quantum API should be aimed at making it easier for client applications to consume the API itself. Potential improvements include:
- a concept of "operational status" for resources, in order to accommodate asynchronous behaviour by plugins
- the capability to specify filters on API requests
- support for paginated collections in responses
- ATOM links to resources in responses (also make them permanent and version-independent)
- a rate-limiting middleware layer

Monday October 3, 2011 9:30am - 10:25am
Wheeler
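
To make the proposed Quantum API improvements above more concrete, here is a hypothetical sketch of what a paginated network collection with per-resource operational status and link elements might look like; the field names and URLs are illustrative assumptions, not an agreed Essex format.

# Hypothetical response shape for GET /v1.1/tenants/t1/networks?status=ACTIVE&limit=2
# Field names and link relations are illustrative, not the ratified API.
import json

response = {
    "networks": [
        {
            "id": "net-1",
            "name": "private",
            "op-status": "UP",            # operational status reported by the plugin
            "links": [{"rel": "self", "href": "/v1.1/tenants/t1/networks/net-1"}],
        },
        {
            "id": "net-2",
            "name": "public",
            "op-status": "PROVISIONING",  # asynchronous plugin work still in flight
            "links": [{"rel": "self", "href": "/v1.1/tenants/t1/networks/net-2"}],
        },
    ],
    # pagination marker the client passes back to fetch the next page
    "networks_links": [{"rel": "next",
                        "href": "/v1.1/tenants/t1/networks?limit=2&marker=net-2"}],
}

print(json.dumps(response, indent=2))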

9:30am

Restful Proxy Service Engine (REPOSE)
In organizations, integrating mutually interacting software applications is a common problem. This problem is addressed in the enterprise by an Enterprise Service Bus (ESB) or Enterprise Application Integration (EAI). The problem is addressed in the cloud by vendors such as Apigee and Mashery. This talk introduces Rackspace's proposed Restful Proxy Service Engine. REPOSE is like a traditional ESB, except that the interface is REST, the protocol is HTTP, and the scale is cloud. We will address the need for a cloud-scale service bus, the architectural limitations that prevent ESBs from scaling to the cloud, the benefits of building a cloud-scale service bus into OpenStack, the next steps, and how you can help.

Monday October 3, 2011 9:30am - 10:25am
Salon A

9:30am

Unconference
Monday October 3, 2011 9:30am - 10:25am
3rd floor

10:30am

Break



Monday October 3, 2011 10:30am - 11:00am
3rd floor

11:00am

swift monitoring
discover what to monitor in a swift cluster

Monday October 3, 2011 11:00am - 11:25am
Salon A

11:00am

Quantum Task Round-up
There are a large number of tactical tasks that we need to tackle for Quantum to become a more mature project and to improve integration with other OpenStack projects. This will be a "lightning talk" aimed at increasing awareness of these tasks, identifying unrecorded tasks, and (hopefully) drumming up interest in them. An initial list of tasks includes:
- packaging for Quantum
- developer documentation
- Keystone integration for QuantumManager-Quantum communication
- API auth model (including authenticating 'vif-plugging')
- pylint improvements
- improvements to API extensions (e.g., issues with data extensibility)
- improving the Dashboard / QuantumManager / Quantum flow (IPAM, project vs. Keystone user mismatch)
- CLI tool + API client for listing VIFs exposed by Nova
- improvements to the Nova API for specifying vNICs
- Quantum CLI extensibility
- pep8 version standardization

Monday October 3, 2011 11:00am - 11:55am
Wheeler

11:00am

MySQL Alternatives
MySQL presents a unique challenge in the deployment of OpenStack. This brainstorming session will look at all of the current technologies and approaches that could be utilized to accomplish a more horizontally scalable solution. Examples include: multi-zone MySQL clusters, MySQL HA (Drizzle HA), and SQLAlchemy integration with NoSQL databases.

Monday October 3, 2011 11:00am - 11:55am
Hutchinson

11:00am

Unconference
Monday October 3, 2011 11:00am - 12:25pm
3rd floor

11:30am

Keystone Domains
Currently Keystone lacks the ability to scope users and their resources to an uber collection. Keystone needs a higher-order entity (aka a domain) to allow for the grouping of users, groups, roles, and tenants. A domain can represent an individual, company, or operator-owned space. The intent of a domain is to define the administrative boundaries for management of Keystone entities. By defining the domain collection, an authorization system, whether external or internal to Keystone, can be used to enforce policies related to admin operations for that domain. With domains in place, administrative roles within each domain can then be defined to control CRUD operations on entities scoped to the domain. Additionally, domains afford the ability to set up cross-domain trust relationships, which can then be used to control the ability to give users of one domain access to resources of another.

Monday October 3, 2011 11:30am - 11:55am
Salon A

12:00pm

Refactoring Glance's internal Python API
It has been discussed that Glance should have clearer internal APIs that more distinctly delineate between the internal Python API and the external HTTP APIs.

Monday October 3, 2011 12:00pm - 12:25pm
Wheeler

12:00pm

Make VM state handling more robust
We've found a number of cases where the current handling of VM state makes some failure modes difficult to debug and confusing to customers. For example, a VM which takes a long time building can be terminated, go to shutdown, and then jump back to life when the build completes. There are also a number of cases where the API allows calls that are inconsistent with the current state, which leads to non-deterministic behavior. We propose to better define and control the allowed state transitions, and to introduce some specific failure states to make some failure modes more explicit.

Monday October 3, 2011 12:00pm - 12:25pm
Hutchinson
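
As one way to picture the "define and control the allowed state transitions" proposal above, here is a small hypothetical sketch of an explicit transition table with a dedicated failure state; the state names are illustrative, not a decided Nova design.

# Hypothetical VM state machine: transitions not listed here would be rejected
# instead of producing the non-deterministic behaviour described above.
ALLOWED_TRANSITIONS = {
    "BUILDING":  {"ACTIVE", "ERROR", "DELETING"},
    "ACTIVE":    {"SHUTOFF", "REBOOTING", "DELETING", "ERROR"},
    "REBOOTING": {"ACTIVE", "ERROR"},
    "SHUTOFF":   {"ACTIVE", "DELETING"},
    "ERROR":     {"DELETING"},          # explicit failure state: only cleanup allowed
    "DELETING":  set(),
}

def transition(current, requested):
    """Return the new state, or raise if the API call is inconsistent with it."""
    if requested not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError("invalid transition %s -> %s" % (current, requested))
    return requested

print(transition("BUILDING", "ACTIVE"))   # ok
# transition("DELETING", "ACTIVE") would raise instead of resurrecting the VM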

12:00pm

ring explanation
discover how the ring works

Monday October 3, 2011 12:00pm - 12:25pm
Salon A

12:30pm

Lunch



Monday October 3, 2011 12:30pm - 1:30pm
3rd floor

1:30pm

Lightning Talks
Monday October 3, 2011 1:30pm - 1:55pm
3rd floor

2:00pm

Quantum: Integrating Advanced Network Services
The goal of this design session is to settle on a framework for how we expect to integrate advanced network services with Quantum. Note: this session is designed to be a pre-requisite for discussion on particular higher-level services (e.g., Firewalls/ACLs, L3, Load-balancing, etc.) Its goal is NOT to actually design any particular higher-level service. Those will be covered in subsequent sessions. Topics covered will include whether higher-level services should be a part of Quantum proper, and if so, how to support different pluggable backends implementing different services, how to handle conflicting designs for a single service (e.g., two participants want different APIs for WAN bridging), and other fun topics in the same vein.

Monday October 3, 2011 2:00pm - 2:25pm
Wheeler

2:00pm

Reddwarf (Database as a Service) Project update
I'd like to show the group what work we have done so far with Reddwarf (Database as a Service). This will chiefly be about where we are, lessons learned from being forked from Nova, and where we want to go with Database as a Service. There will also be a discussion on all things Reddwarf in a brainstorm session. This is more of an informative session to catch everyone up with what we are doing.

Monday October 3, 2011 2:00pm - 2:25pm
Salon A

2:00pm

Stable Release Updates
This session is mainly targeted towards distributions, integrators and those that deploy. OpenStack moves at an incredible pace, and the target audience for this session usually takes a released snapshot and aims to support it for a given time. This often means that they need to cherry-pick fixes back from trunk, or create their own custom patches. It would seem logical to pool this effort together, where the target audience for the session can talk about viability, procedure, policy and collaboration.

Monday October 3, 2011 2:00pm - 2:55pm
Hutchinson

2:00pm

Unconference
Monday October 3, 2011 2:00pm - 3:55pm
3rd floor

2:30pm

Network Services Insertion
This session defines the way services will be inserted in the network and the connections necessary to have them up and running. There will be two types of services: symmetric and single services. Symmetric services require one server running at each side of the edge routers of the network, one for the client side and one for the application server side. These services may have an understanding of their peer server's configuration, but it is not necessary. An example of this type of service is a network application accelerator. Single services require just one server to enforce their functionality, and they are mostly deployed at the application server side of the network. Examples of this type of service are load balancers and firewalls, among others. Regardless of the type of service being inserted into the network, there are different models for wiring a service into the network. These are the two existing models:
A) In-path (bump in the wire): the service is placed in the path of the traffic to the server VMs by splitting the network into two and having the service bridge between the two halves, applying the service in the process. This is achievable with the current set of Quantum APIs. HA in this model is achieved through an external monitoring entity that monitors the health of the service and kills/re-spins it if it goes down.
B) Out-of-path (redirection): the service (a single node or a cluster of nodes) is placed out of the normal traffic flow path, and the gateway redirects the candidate set of traffic to the service. After the service has been applied to the traffic, it is returned to the gateway for forwarding to the end host. HA in this model is achieved through an external monitoring entity as in the in-path case. In addition, the VPN gateway could also monitor the state of the service and choose to make alternate redirection decisions.

Monday October 3, 2011 2:30pm - 2:55pm
Wheeler

2:30pm

auth integration explanation
discover the auth hooks present in swift and how to integrate with them

Monday October 3, 2011 2:30pm - 2:55pm
Salon A

3:00pm

HA capabilities for guest instances
During nova-compute downtime, guest instances cannot be controlled; instances may also become unresponsive because of host/guest failures. This session is to explore what extensions could be added to the nova-compute drivers (and related API) so that guest instance management is enriched with HA capabilities. Things like restarting HA-enabled VMs impacted by host failures, reallocating VMs onto a host coming back to life, and live migration based on predicted failures could be compelling features for certain types of workloads.

Monday October 3, 2011 3:00pm - 3:55pm
Salon A

3:00pm

Separating APIs from Implementation of the API
This discussion session covers why the OpenStack community and governance boards should consider the API separately from the implementation of the API. The API and the implementation should be separated entirely, including having the PPB (or some other focused group) vote *separately* on the API of a layer and on the implementation of that layer. This would make it much easier for competing implementations to be written and "blessed" as OpenStack compliant. The Object Storage API could then have non-Python implementations, such as what has been done by the folks at GlusterFS and is being worked on for two other storage systems/vendors. Likewise, if Rackspace IT or some other organization (just using Rackspace IT as an example since I recently spoke with them.. ;) ) have a Java implementation of the Images API, they could go ahead and use that without worrying that their implementation would break other parts of OpenStack. In addition, these folks could propose their implementation for inclusion into "core OpenStack". That said, if you separate out the API from the implementation, the idea of "core OpenStack" might be able to change from "a set of mostly Python projects (because that's what things were implemented in...)" to "a set of projects that implement the core OpenStack APIs". Another way to think of it: we don't say that RabbitMQ is "core AMQP". We say that RabbitMQ "implements the AMQP standard". OpenStack should be similar: a standard set of APIs for cloud components, with a set of implementations of those APIs.

Monday October 3, 2011 3:00pm - 3:55pm
Hutchinson

3:00pm

CDN as a Service using Openstack Building Blocks
In this session we'll discuss the requirements for a CDN as a Service built on top of OpenStack. We'll discuss the various CDN use cases, brainstorm about many facets of a CDN service, and evaluate the gaps. As an example, we'll discuss and explore the new components needed not only to create a CDN service for OpenStack Swift, which the Swift Origin Server (SOS) is starting to implement, but also to allow users to CDN-enable static files such as images, JavaScript files, CSS, etc. that do not reside in Swift. We would like to discuss the implications of adding Keystone into the CDN architecture and understand how to best handle the CDN provider requests for public content. Another important aspect of CDN as a Service will be the integration layer with the CDN providers.

Monday October 3, 2011 3:00pm - 3:55pm
Wheeler

4:00pm

Break



Monday October 3, 2011 4:00pm - 4:30pm
3rd floor

4:30pm

replication explanation
discover how replication works

Monday October 3, 2011 4:30pm - 4:55pm
Salon A

4:30pm

NetStack integration with Nova and Dashboard
In this session, all parties involved in NetStack will resume the discussion we already had at the Diablo summit concerning how NetStack services (at least Quantum, at the moment) should interact with other services from Nova (mainly compute and network), as well as other services from the OpenStack ecosystem (mainly the Dashboard). In the Diablo release cycle, great progress has already been made: VIF drivers, a Quantum network manager for nova-network, and a Quantum plugin for the OpenStack Dashboard. Nevertheless, we still need to address some rather important points, such as standardizing the way in which the network service interacts with the compute service (and/or the hypervisor), and vice versa. The aim of the session is to find (or at least pave the way for) a solution for better integration of NetStack services with the rest of OpenStack, building on the work performed in the Diablo timeframe, and keeping in mind that one of the principles behind NetStack was to keep it as loosely coupled as possible with the rest of the OpenStack services.

Monday October 3, 2011 4:30pm - 5:25pm
Wheeler

4:30pm

OpenStack Compute API 2.0
What should the 2.0 version of the API look like? If we make the decision to remove backwards compatibility with API 1.1, what does that open up to us? Here are some concepts that have caused concern in 1.1; we can discuss those. More importantly, we can get a vision of what new features the 2.0 API should have.
1. Fully asynchronous model - are there parts of 1.1 we can clean up?
2. Progress indicators - user experience designers love 'em: can they be made more realistic or should they be eliminated?
3. How can we best retrieve (near-) real-time information on errors and usage from a server? Right now, all that data is pushed out over a message queue. As a user, I want to see all the problems with a server at once, and not have to build a solution to store and manage those.
4. Fault analysis and healing: should the OS Compute API be able to recognize problems and suggest solutions (high disk I/O, network bandwidth, etc.)? What would the API for that look like?
5. Can we build elasticity into our cloud? How would we define what that looks like to the user?

Monday October 3, 2011 4:30pm - 5:25pm
Hutchinson

4:30pm

Unconference
Monday October 3, 2011 4:30pm - 5:55pm
3rd floor

5:00pm

Atlas-lb (Loadbalancing as a Service)
We will provide an update on the work we have done so far, and start discussions around the project. This will be mainly about the API 1.1 spec we just proposed, how we plan to implement the core, extensions, and adapters for the service, and the challenges associated with that. http://wiki.openstack.org/Atlas-LB

Monday October 3, 2011 5:00pm - 5:25pm
Salon A

5:30pm

NetStack Continuous Integration Planning
The idea is to get all the relevant NetStack parties in one room with a whiteboard and plan out how CI will be done and what the process will be to get everything up and running. By the time the summit begins, we hope to have all the NetStack Jenkins infrastructure in place; this session is more about planning what needs to be done with it.

Monday October 3, 2011 5:30pm - 5:55pm
Wheeler

5:30pm

Secure control path for Nova-volume attach
We believe that the method used in Nova-Diablo to attach volumes has a security vulnerability that could enable a rogue VM to access other users' volumes. In Diablo, the API request is passed directly from the API node to the Nova compute node, where it calls the driver-specific method to attach the volume. While the API request is authenticated at the API node, this authentication is not enforced at the compute node. It is theoretically possible for a rogue VM to take control of its host. In such a scenario it could potentially attach, read, erase, etc., any user's volume. In the case where storage is via a SAN-type solution, and the nova-volume service is not running on the same host as nova-compute, we would like to add a path to the execution of the request that first calls a driver method on the Nova volume node that can be used to enable access to the volume from the destination compute node. In the case where this method has not been called for a specific volume/compute-host combination, the SAN will be able to block access requests to other volumes, narrowing the amount of accessible data considerably. For existing volume drivers, this will be a no-op and no code changes are required of them.

Monday October 3, 2011 5:30pm - 5:55pm
Salon A

5:30pm

Serialization and the WSGI Layer
I'd like to discuss plans to refactor the (mostly common) code shared between Nova, Keystone and Glance that handles deserialization of request data, serialization of response data, and the general architecture of the WSGI pipeline. Brian Waldon did some excellent work in Glance to refactor the tight coupling of the Controller with HTTP request and response handling. I'd like to explore ways that this work can be continued and eventually make its way into the openstack-skeleton project. The initial refactoring of the WSGI pipeline in Glance added a wsgi.Resource class that contains a serializer, deserializer and controller. However, the Controller objects (in both Nova and Glance) continue to raise HTTP errors directly through webob.exc.HTTPX objects. One initial refactoring I'd like to see is the complete decoupling of the controller from the protocol that the server is responding on. Eventually, we want Glance to be able to respond on non-HTTP protocols (such as AMQP), and having an entirely decoupled controller allows greater code reuse.

Monday October 3, 2011 5:30pm - 5:55pm
Hutchinson
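
A minimal sketch of the decoupling described in the WSGI session above, assuming nothing about the real Nova/Glance classes: the controller returns plain data and raises protocol-neutral faults, while a separate resource object owns serialization, so the same controller could later sit behind HTTP or AMQP.

import json

class Fault(Exception):
    """Protocol-neutral error; an HTTP front end can map it to a status code,
    an AMQP front end to an error reply."""
    def __init__(self, code, message):
        super().__init__(message)
        self.code = code

class ImageController:
    """Knows nothing about webob, headers, or status codes."""
    _images = {"1": {"id": "1", "name": "cirros"}}

    def show(self, image_id):
        try:
            return self._images[image_id]
        except KeyError:
            raise Fault("NotFound", "no image %s" % image_id)

class JSONSerializer:
    def serialize(self, data):
        return json.dumps(data)

class Resource:
    """Glues deserialized input, the controller, and the serializer together."""
    def __init__(self, controller, serializer):
        self.controller = controller
        self.serializer = serializer

    def __call__(self, action, **kwargs):
        result = getattr(self.controller, action)(**kwargs)
        return self.serializer.serialize(result)

resource = Resource(ImageController(), JSONSerializer())
print(resource("show", image_id="1"))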

6:00pm

Beer Garden, sponsored by HP

Beer Garden event at McGreevy's, hosted by HP
Shuttle buses will pick up guests from lobby at 6pm.
Will include free food and beverage.


Monday October 3, 2011 6:00pm - 9:00pm
3rd floor
 
Tuesday, October 4
 

9:00am

Extensions in Glance - What, How, and Why
This session will be for brainstorming about adding extensions to Glance. We need to cover: * What exactly *is* an extension? Is an extension a truly optional component that can be switched on or off at-will? Must an extension always add to the Images API? Or can an extension change existing behaviour in a documented way? Or can an extension exist that does not modify the API in any way at all (think: transparent caching extension)? * Use cases for extensions * Possible implementation ideas for extensions * Where to put extensions? Where to document them?

Tuesday October 4, 2011 9:00am - 9:25am
Wheeler

9:00am

Authenticating by 'scope,' rather than by 'tenant'
Authenticating in Keystone currently centers around producing "unscoped" and "scoped" Tokens; "scoped" Tokens being those assigned to a specific Tenant. To increase the flexibility of Openstack's Identity-API, clients could instead authenticate for a "named Scope," which could include zero-to-many Roles, Tenants, Endpoints, etc. The proposed API should be generic enough to allow a specific implementation or configuration to impose arbitrary constraints on the environment (e.g. "one user per tenant") if desired.

Tuesday October 4, 2011 9:00am - 9:25am
Salon A

9:00am

Host aggregates in OSAPI admin extension
In this session we would like to discuss the introduction of the concept of an aggregate. An aggregate is a collection of compute hosts that have access to shared capabilities (e.g. storage, network, etc.) and allows things like zero-downtime upgrades of hosts (for example, via live migration of instances from one member of the aggregate to another), thus causing no disruption to guest instances. The concept of a hierarchy of aggregates could also be explored for scalability reasons. We propose to expose such a concept to the admin of the cloud, and at the same time to keep it transparent to the end user. In fact, the OSAPI already provides some capabilities for host management via admin extensions, but this is very crude at the moment. Further extending the concept means that putting a host into 'maintenance mode' implies that all the instances running on it can migrate onto an available host in the aggregate, thus causing no disruption to guest instances. Bringing such a capability into the API (and consequently into the orchestration infrastructure) also means that the admin can intervene on the platform while remaining agnostic of the kind of hypervisor that the host is running, which is very beneficial.

Tuesday October 4, 2011 9:00am - 9:25am
Hutchinson

9:00am

Unconference
Tuesday October 4, 2011 9:00am - 10:25am
3rd floor

9:30am

VM disk management in Nova
From discussions on the mailing list, there are some differences in how disk management is currently implemented in Nova. This session would discuss the differences between the virt layers and attempt to come to a consensus on a single way to manage disks. Current discussions have been around fixed disks, with optional expandable secondary disks, or a single disk that is expanded. I would like to discuss these options as well as others.

Tuesday October 4, 2011 9:30am - 9:55am
Hutchinson

9:30am

OpenStack Documentation Strategies
OpenStack documentation revolves around docs for Python developers, developers who use the OpenStack APIs, system administrators, and cloud administrators. The Doc Team, led by Anne Gentle, wants to discuss the docs move to GitHub, the gaps in the current documentation, and the resulting doc bug list from the 9/19/11 Doc Blitz Test.

Tuesday October 4, 2011 9:30am - 9:55am
Salon A

9:30am

Donabe API/models
This session is for discussions about the API and container models for Donabe

Tuesday October 4, 2011 9:30am - 10:25am
Wheeler

10:00am

Inventory of cloud resources
The goal is to be able to retrieve information not only about running/scheduled instances and volumes, but also about the total available and used H/W resources, such as the amount of CPU/memory/disk space (plus disk types), networking resources, etc. Some of this information is reported to schedulers. We can try to retrieve it from them (though it might be tricky in multi-scheduler-per-zone environments) or register a new service that will collect it from nodes. This session will focus on Nova, but it might be relevant for other services as well.

Tuesday October 4, 2011 10:00am - 10:25am
Salon A

10:00am

Integration Test Suites and Gating Trunk
There has been a proliferation of custom integrated test suites and frameworks over the last year. Some of these include:
* Stacktester (https://github.com/rackspace-titan/stacktester/)
* Kong (https://github.com/cloudbuilders/kong)
* Backfire (https://github.com/ohthree/backfire)
* Proboscis (https://github.com/rackspace/python-proboscis)
An effort to consolidate some of these efforts into a unified integrated test suite for OpenStack has been started and is available at https://github.com/sorenh/openstack-integration-tests. Some discussion about the effort occurred on the mailing list: https://lists.launchpad.net/openstack/msg04014.html. This session is to discuss the new, singular place to house integration tests of OpenStack, bring everyone's ideas to the table, assign some action items to interested parties, and make sure all interested parties have a clear roadmap for the integration test suite. Let's talk about a possible plan to gate commits so that things don't break quite so often. Things to consider:
- Cross-service? (Keystone, Glance, Nova)
- Packages or source installs?
- Config management? (Chef, Puppet)
- How to maintain? If we use packages, then which packages control commits into trunk (Ubuntu, Debian, both)?
- Which configurations are we gating against (libvirt, XenServer, MySQL, Postgres, etc.)?

Tuesday October 4, 2011 10:00am - 10:25am
Hutchinson
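
A tiny illustrative sketch of what a gating smoke test could look like (it is not taken from any of the suites listed above): a unittest case that probes a deployed endpoint only when an environment variable is set, so the same file can run in a gate job and be skipped on a developer laptop. The variable name and the check are assumptions for illustration.

import os
import unittest
import urllib.error
import urllib.request

COMPUTE_URL = os.environ.get("SMOKE_COMPUTE_URL")  # e.g. set by the gate job

class ComputeSmokeTest(unittest.TestCase):
    @unittest.skipUnless(COMPUTE_URL, "SMOKE_COMPUTE_URL not set")
    def test_api_answers(self):
        # A commit would only merge if the deployed API at least responds
        # with something other than a server error.
        try:
            with urllib.request.urlopen(COMPUTE_URL, timeout=10) as resp:
                status = resp.getcode()
        except urllib.error.HTTPError as err:
            status = err.code
        self.assertLess(status, 500)

if __name__ == "__main__":
    unittest.main()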

10:30am

Break



Tuesday October 4, 2011 10:30am - 11:00am
3rd floor

11:00am

Images 2.0 API - Mover and Registry separation
Brainstorm session to flesh out a 2.0 Images API. There are a number of deficiencies in the current 1.1 Images API. Currently, the Glance API node communicates with the Glance registry and returns image metadata as HTTP headers in calls to GET /images/ and HEAD /images/. We want to split the API requests for metadata from the API requests for image data for the following reasons: 1) unnecessary communication with the registry when not requested; 2) the ability to cache static image data files vs. non-static image metadata; 3) the ability to have larger and more complex structured image metadata that would neither fit in HTTP headers nor be easy to parse. It has been proposed that the 2.0 Images API split image file and metadata into clearly delineated resource endpoints that the Glance client class would be able to navigate.

Tuesday October 4, 2011 11:00am - 11:55am
Wheeler
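
To illustrate the metadata/data split discussed above, a hypothetical client-side sketch follows; the /v2 paths are assumptions for discussion, not the ratified Images 2.0 API.

# Hypothetical split: one endpoint returns a JSON metadata document, the other
# streams the raw image bytes, so caches can treat them differently.
GLANCE = "http://glance.example.com:9292"   # illustrative host

def metadata_url(image_id):
    return "%s/v2/images/%s" % (GLANCE, image_id)        # structured JSON body

def data_url(image_id):
    return "%s/v2/images/%s/file" % (GLANCE, image_id)   # cacheable static bytes

if __name__ == "__main__":
    print(metadata_url("1234"))
    print(data_url("1234"))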

11:00am

Dashboard, Nova, Keystone and RBAC
Discuss role base access controls and our strategy for defining and implementing this with Keystone in a standardized way.

Tuesday October 4, 2011 11:00am - 11:55am
Hutchinson

11:00am

A PaaS Layer for OpenStack (Neutronium)
This brainstorming session will be an open discussion of how best to add a PaaS layer to the existing OpenStack IaaS layer. HP Cloud Services has some preliminary ideas we would like to share, but the goal of this session is to open a wider community discussion. HP Cloud Services' current ideas are in the Neutronium blueprint. Neutronium is a platform services framework accessed via a RESTful API with well-defined interfaces that permit easy integration of a wide range of platform services and tools.

Tuesday October 4, 2011 11:00am - 11:55am
Salon A

11:00am

Unconference
Tuesday October 4, 2011 11:00am - 12:25pm
3rd floor

12:00pm

RBAC for Quantum
Currently Quantum has a very basic authorization model: all the users belonging to a tenant have the same rights, and only users with "administrative" roles are allowed to plug interfaces. This is rather limiting and should be improved in several ways: 1) having a full RBAC model, thus allowing tenants to specify distinct roles for their users, e.g. "standard users" vs. "network administrators"; 2) allowing Quantum to communicate with the "interface service" (Nova), in order to fetch information concerning VIF ownership; 3) allowing Quantum to manage private (i.e. per-tenant) networks alongside public/community networks (i.e. networks where each tenant or specific groups of tenants can plug their interfaces). This proposed session is somewhat related to: http://summit.openstack.org/sessions/view/47 (Dashboard, Nova, Keystone and RBAC, OpenStack core track)

Tuesday October 4, 2011 12:00pm - 12:25pm
Wheeler

12:00pm

OpenStack Common
OpenStack currently comprises three official projects, one incubated project, and a number of satellite projects:
- Compute/Nova (official)
- Image/Glance (official)
- Object/Swift (official)
- Identity/Keystone (official as of Essex)
- Network/ (satellite)
- Queue/Burrow (satellite)
- Volume/Lunar (satellite)
- Database/Red Dwarf (satellite)
- ...and more!
Many of these projects share a large amount of code which has been copied, pasted, and modified to fit new scenarios. I would like to propose: 1) the creation of an `openstack.common` Python module to be hosted on GitHub; 2) official guidelines for including code into `openstack.common`, including a long list of DOs and DON'Ts; 3) a timeline and steps for moving forward, as well as a discussion of potential pitfalls.

Tuesday October 4, 2011 12:00pm - 12:25pm
Hutchinson

12:30pm

Lunch



Tuesday October 4, 2011 12:30pm - 1:30pm
3rd floor

1:00pm

PPB meeting

The Project Policy Board will meet in public during the Essex Design Summit !


Tuesday October 4, 2011 1:00pm - 1:55pm
Salon A

1:30pm

Lightning Talks
Tuesday October 4, 2011 1:30pm - 1:55pm
3rd floor

2:00pm

Dashboard Essex roadmap
We will use this time to discuss upcoming features for Essex and have an open forum for discussing new features.

Tuesday October 4, 2011 2:00pm - 2:55pm
Salon A

2:00pm

Advanced Scheduling
Since the Diablo Summit, there have been a number of advances with Zones and the Distributed Scheduler (particularly with the Least Cost scheduler). Additionally, there are still a lot of groups that have specific needs of the scheduler, such as: 1. the use of different metrics for making scheduling decisions, such as InstanceType, compute host ports/interfaces, etc.; 2. scheduling for networks and volumes; 3. different cost/weighting calculations. This session is intended to talk about the requirements of these other groups and try to unify the solutions. We only have 55 minutes for this session, so let's try to get all the issues outlined on this wiki page first: http://wiki.openstack.org/EssexSchedulerSession

Tuesday October 4, 2011 2:00pm - 2:55pm
Hutchinson

2:00pm

geographic replication and tiered zones
how can we have one logical swift cluster span a wide geographic area? this will include a discussion on tiered zones in the ring

Tuesday October 4, 2011 2:00pm - 2:55pm
Wheeler

2:00pm

Unconference
Tuesday October 4, 2011 2:00pm - 3:55pm
3rd floor

3:00pm

Glance API 2.0 - Image Properties
There's been a bunch of admittedly hacky code placed into the Glance API and registry servers for dealing with custom image properties. Some of the problems that are still unsolved and/or solved using pretty ugly hacks are:
* The API does not allow manipulation of individual image properties
* The image property is a simple key-value pair -- there is a need to have image properties have more attributes than just a key and a value, e.g. a type, owner, visibility flag, etc
This session is to discuss a proposed new image properties subresource endpoint for the OpenStack Images API 2.0

Tuesday October 4, 2011 3:00pm - 3:25pm
Wheeler
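
A small sketch of the richer property model mentioned in the session above, with assumed attribute names (type, owner, visibility) rather than a settled schema, plus the kind of per-property endpoints such a subresource implies.

from dataclasses import dataclass

@dataclass
class ImageProperty:
    # Today a property is just key/value; the extra attributes below are the
    # kind of thing the session proposes, with assumed names.
    key: str
    value: str
    type: str = "string"
    owner: str = ""
    public: bool = False

prop = ImageProperty(key="os_distro", value="ubuntu", owner="tenant-42", public=True)

# A per-property subresource would then allow, for example:
#   GET    /v2/images/{image_id}/properties/os_distro
#   PUT    /v2/images/{image_id}/properties/os_distro
#   DELETE /v2/images/{image_id}/properties/os_distro
print(prop)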

3:00pm

Private/public cloud integration (bursting)
In a perfect world, private clouds would be able to automatically expand to take advantage of public clouds (aka bursting). This raises issues of authentication, authorization, and federated management of those roles and responsibilities. Can we come up with a model for how to handle this in Essex? I know that a lot of people have been thinking about this, but there does not appear to be a solid consensus at the moment.

Tuesday October 4, 2011 3:00pm - 3:25pm
Hutchinson

3:00pm

adding/removing/replacing nodes
discover best practices for operating a swift cluster

Tuesday October 4, 2011 3:00pm - 3:25pm
Salon A

3:30pm

Server Templates
A server template is a compute image plus some additional metadata (and perhaps meta-metadata) used to drive the configuration of the applications installed in the image. A server template could be used, for example, to build a server containing a pre-installed WordPress system and database, or it could be used to build a network appliance such as a VPN server or firewall. This session is to brainstorm and capture ideas for what a server template might look like, how it could be stored, and what the preliminary requirements are.

Tuesday October 4, 2011 3:30pm - 3:55pm
Wheeler

3:30pm

Quantum: Network Services Parity with nova-network
Goal is to discuss how to integrate various features of the existing nova-network service other than L2 forwarding into the Quantum model. These include: DHCP, floating IPs, VPN, NAT, L3 gateway, the metadata server, as well as IPAM (handled by Melange) The goal here will be to design the workflow + toolset when using Quantum with these higher level capabilities. If using Quantum + Melange + other network services is too complicated, folks won't do it. Note: the determination here may be that we slice certain functionality off into a dedicated service, implement it as a sub-part of Quantum, or keep it in nova. If we decide to re-implement the functionality either as a sub-service of Quantum or as a separate service, we would have a separate discussion to actually design the new service or sub-service.

Tuesday October 4, 2011 3:30pm - 3:55pm
Salon A

3:30pm

Waiting for Keystone - Hacking AuthZ short-term
While the ultimate goal is using Keystone for AuthZ in Nova, we may need an interim solution until Keystone is fully ready. This session will talk, at a code level, about what might be possible to do in Nova until Keystone is ready. How will authorization checks get performed? Decorators? Explicit if-statements? When will authorization checks get performed? In the API? In Server.API? In the service itself? How will we configure this interim solution without writing a whole lot of code that will be thrown away later? .conf? db? json? .py? Will we require all Nova resources to belong to Resource Groups in order to keep the AuthZ-service calls to a minimum? Will this work across Zones? Will this work in federated environments? How will this interim solution get replaced when Keystone is ready?

Tuesday October 4, 2011 3:30pm - 3:55pm
Hutchinson
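
Since the session above asks "Decorators? Explicit if-statements?", here is a minimal decorator-style sketch under assumed names (RequestContext, role strings); it is one possible interim shape, not an agreed design.

import functools

class RequestContext:
    def __init__(self, user, roles):
        self.user = user
        self.roles = set(roles)

class NotAuthorized(Exception):
    pass

def require_role(role):
    """Reject the call before it reaches the compute/volume code."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(context, *args, **kwargs):
            if role not in context.roles:
                raise NotAuthorized("%s requires role %r" % (func.__name__, role))
            return func(context, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def pause_instance(context, instance_id):
    return "paused %s" % instance_id

ctx = RequestContext("alice", roles=["admin"])
print(pause_instance(ctx, "i-1"))   # ok; a member-only context would raise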

4:00pm

Break



Tuesday October 4, 2011 4:00pm - 4:30pm
3rd floor

4:30pm

Netstack L3 Service
Today, NetStack's first service, Quantum, provides a logical abstraction and related APIs for L2 networks. In this NetStack L3 service session, let's discuss a NetStack service that could provide L3 logical abstractions such as L3 gateways, subnets, route rules, and so on. Proposed discussion topics are: L3 logical abstractions - why and how; instantiation models; Quantum and "L3 Service" integration points; use cases; and prioritizing what we need for the Essex timeframe.

Tuesday October 4, 2011 4:30pm - 5:25pm
Wheeler

4:30pm

Integration and future of Novaclient
Novaclient currently lives on GitHub, but it is a core part of Nova as it is required for Zones. Do we think it's time to bring it in from the cold? Issues:
1. a client that tracks the bleeding edge of dev
2. importance of an externally maintained client tool
3. linkage ... how to integrate
That said, keeping the design of novaclient in step with nova is a very laborious process. There are other design possibilities that we'd like to talk about as well. So, our secondary agenda is to explore a new implementation of Novaclient:
- rebrand as OpenStack Compute Client and lose Nova-specific functionality?
- should we base it on an IDL or continue with the existing approach?
- what limitations will there be if we choose to convert to an IDL-based implementation?

Tuesday October 4, 2011 4:30pm - 5:25pm
Hutchinson

4:30pm

OpenStack Faithful Implementation Test (FITs)
The proposed FITs standards will govern what products and services will need to do in order to describe themselves as "Built on OpenStack" or "Powered by OpenStack". This session will review the current state of FITs proposals, and dive into the arguments and counter arguments around certification of APIs, quality, performance, and functional coverage. See http://openstack.org/brand/openstack-trademark-policy/ for background.

Tuesday October 4, 2011 4:30pm - 5:25pm
Salon A

4:30pm

Unconference
Tuesday October 4, 2011 4:30pm - 5:55pm
3rd floor

5:30pm

Hybrid Cloud Service
As we evolve towards having a set of services, commonly called NetStack, to provide APIs for all network-related services and resource consumption in OpenStack, one compelling service could be to seamlessly connect a tenant's data center with the tenant's cloud resources. The proposed discussion agenda for this Hybrid Cloud Service session is:
1. Discuss the Hybrid Cloud Service - VPN gateway service - why and how
2. Use cases and deployment discussions
3. Abstraction model and APIs, if time permits
4. Prioritize work items for the Essex release

Tuesday October 4, 2011 5:30pm - 5:55pm
Wheeler

5:30pm

Nova Image cache management
Nova loads images on demand from Glance, which may then be shared by multiple VMs via copy-on-write layers. The resulting image cache on each compute server is currently unmanaged, and performing any audit/clean-up operations on it over a large number of compute servers is a non-trivial operational procedure. This session would explore the concept of a cache manager to identify the list of images and their current state with respect to their usage by VMs and whether they still exist in Glance. Based on this state of cached images, the cache manager would be able to take a number of configurable actions, such as:
- deleting images in the cache that are no longer used (or have not been used for some period) by VMs
- deleting images in the cache that are no longer used by VMs and are no longer present in Glance
- preloading specific images, for example by matching on specific Glance metadata
- tracking the usage of licensed operating systems by referring to metadata associated with the images
Preloading will need to implement a mechanism to ensure that compute servers do not all request the same image from Glance concurrently. Once image state data is available, it could be used by a scheduler enhancement to place VMs on servers which already have a cached copy of the required image.

Tuesday October 4, 2011 5:30pm - 5:55pm
Hutchinson
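
A rough sketch of the clean-up half of the cache manager described above, under stated assumptions: the cache is a plain directory of base images, the "in use" and "still in Glance" sets are supplied by the caller, and the sketch only reports what it would delete.

import os
import time

def cache_candidates(cache_dir, in_use, in_glance, max_idle_days=14):
    """Yield (path, reason) for cached images the manager could remove.

    cache_dir  -- directory of cached base images (hypothetical layout)
    in_use     -- set of file names currently backing running VMs
    in_glance  -- set of file names that still exist as Glance images
    """
    cutoff = time.time() - max_idle_days * 86400
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        if name in in_use:
            continue
        if name not in in_glance:
            yield path, "unused and no longer in Glance"
        elif os.path.getmtime(path) < cutoff:
            yield path, "unused for more than %d days" % max_idle_days

if __name__ == "__main__":
    cache_dir = "/var/lib/nova/instances/_base"   # assumed location, for illustration
    if os.path.isdir(cache_dir):
        for path, reason in cache_candidates(cache_dir, in_use=set(), in_glance=set()):
            print("would delete %s (%s)" % (path, reason))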

5:30pm

Introducing Cloud Audit
CloudAudit is an open standard developed by a Cloud Security Alliance working group that provides an open, common, extensible namespace and interface to enable cloud computing providers and their authorized customers to automate audits, assertions, assessments and assurance for their cloud infrastructure, platform or application environments. Piston Cloud Computing, a regular contributor to OpenStack, is developing an implementation of the CloudAudit API for inclusion into the Essex release of OpenStack. In this session, Chris will discuss the CloudAudit API, share the new implementation, and introduce the implementation of the NIST 800-53 glossary and control items.

Tuesday October 4, 2011 5:30pm - 5:55pm
Salon A

6:00pm

Harbor boat tour, hosted by CloudScaling

Harbor boat tour
We will walk to this location as a group. Meet in lobby at 6pm. Boat departs at 6:30pm sharp, comes back at 9:30pm. Will include free food and beverage.


Tuesday October 4, 2011 6:00pm - 9:30pm
3rd floor
 
Wednesday, October 5
 

9:00am

Dashboard interaction and visual design for Essex
We will be presenting and discussing visual design, user interaction, and user experience for the Essex timeframe.

Wednesday October 5, 2011 9:00am - 9:25am
Salon A

9:00am

Cobbler Integration
Integrate cobbler into nova in order to boot bare metal machines.

Wednesday October 5, 2011 9:00am - 9:25am
Hutchinson

9:00am

Swift Object Versioning
Discuss adding object versioning to Swift

Wednesday October 5, 2011 9:00am - 9:25am
Wheeler

9:00am

Unconference
Wednesday October 5, 2011 9:00am - 10:25am
3rd floor

9:30am

XenServer KVM feature parity plan
Right now there are quite a few feature gaps between KVM and XenServer. The goal of this session is to document all known gaps, decide on an action plan for getting them resolved, create bugs/blueprints for all of them, and (hopefully) assign them.

Wednesday October 5, 2011 9:30am - 10:25am
Hutchinson

9:30am

OpenStack - Satellite
OpenStack - Satellite would be an area for projects that are not in the core of OpenStack (i.e., Nova, Swift, Glance, etc.). It would be a "satellite" area for projects, deployment utilities, code snippets, and configuration files. The satellite would offer support for core components and act as a common repository that would be easily searchable, as the satellite projects would be centralized in one area. This could in turn cut down on the possible duplication of effort. Another benefit would be the language independence of projects in the satellite area: OpenStack - Satellite would be language independent in order to maximize the breadth of contributors. OpenStack - Satellite could open a whole new dimension in contributions to the community from Rackspace and others. OpenStack - Satellite is not a place for core OpenStack development or extensions of core OpenStack development. Some examples could be: 1) Secure CDN 2) Logging 3) Usage Services 4) Billing Services 5) Incident Services 6) Crowbar

Wednesday October 5, 2011 9:30am - 10:25am
Salon A

9:30am

dev/ops tools
what tools are needed for running a swift cluster?

Wednesday October 5, 2011 9:30am - 10:25am
Wheeler

10:30am

Break



Wednesday October 5, 2011 10:30am - 11:00am
3rd floor

11:00am

Glance Throughput Improvement
The Glance API server currently starts a wsgi server on a green thread per request. This appears to limit the processing capability of the Glance API server to a single CPU. Under heavy load this can make Glance appear un-responsive. This is especially noticeable when using SSL, where a single thread can use 100% CPU for encryption and compression processing. We propose to add multi-process support by starting up a configurable number of processes each listening on the Glance server port.

Wednesday October 5, 2011 11:00am - 11:25am
Wheeler
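
The Glance abstract above proposes a configurable number of processes sharing the server port; here is a bare-bones pre-fork sketch of that pattern using only the standard library (Unix only). It is meant to illustrate the idea, not the actual Glance change; the port and worker count are illustrative.

# Pre-fork pattern: the parent binds the port once, then each worker process
# accepts connections independently, so work spreads across CPUs.
import os
import socket

WORKERS = 4
RESPONSE = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"

def worker(listener):
    while True:
        conn, _addr = listener.accept()
        conn.recv(65536)          # read (and ignore) the request in this sketch
        conn.sendall(RESPONSE)
        conn.close()

if __name__ == "__main__":
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 9292))   # Glance API port, for illustration
    listener.listen(128)
    for _ in range(WORKERS):
        if os.fork() == 0:             # child: serve requests on this CPU
            worker(listener)
    os.waitpid(-1, 0)                  # parent just waits on the children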

11:00am

Nova: Support for PCI Passthrough & SR-IOV
Discussion about adding support for PCI-passthrough & SR-IOV enabled devices - Inventory management (manual vs auto-discovery of available resources) - Capability reporting and Scheduling - Implementation on different platforms: Libvirt, Xen, HyperV, VMware - Live & Offline Migration & possible other aspects Related BP: https://blueprints.launchpad.net/nova/+spec/pci-passthrough Merge proposal (first cut): https://review.openstack.org/#change,776

Wednesday October 5, 2011 11:00am - 11:25am
Hutchinson

11:00am

Security APIs for the Cloud
This talk will describe a set of security APIs that have been developed for the cloud, which provide the following features: i) federated SSO (including using Facebook, Google, SAML accounts, etc. to log in); ii) attribute-based access controls, for granting access to your cloud resources to other cloud users; iii) delegating access to anyone from anywhere to your cloud resources.

Wednesday October 5, 2011 11:00am - 11:55am
Salon A

11:00am

Unconference
Wednesday October 5, 2011 11:00am - 12:25pm
3rd floor

11:30am

Nova Upgrades
Discussion of possible changes to Nova to support transparent updates. There are a number of ways to go about upgrading Nova with service impacts that range from no impact to a complete outage. Making upgrades as transparent as possible will involve carefully controlling the dependencies between components and the order in which various components are upgraded. This session will be a discussion of changes to Nova that will minimize the impact of future upgrades on users.

Wednesday October 5, 2011 11:30am - 11:55am
Hutchinson

11:30am

Automatic large container shard
Single-container performance for object writes degrades as a container gets bigger. One possible solution is to split containers when they get large, hiding the resulting container partitions from the user.

Wednesday October 5, 2011 11:30am - 11:55am
Wheeler
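
To picture the "hidden container partitions" idea from the sharding session above, here is a toy sketch with an assumed naming scheme: objects are routed to a shard container derived from a hash of the object name, while the user keeps addressing the single logical container.

import hashlib

SHARD_COUNT = 16   # illustrative; a real scheme would grow this as containers fill

def shard_container(container, obj_name):
    """Map a logical container/object pair to a hidden shard container."""
    digest = hashlib.md5(obj_name.encode("utf-8")).hexdigest()
    shard = int(digest, 16) % SHARD_COUNT
    return "%s_shard_%02d" % (container, shard)   # name hidden from the user

print(shard_container("photos", "vacation/001.jpg"))
print(shard_container("photos", "vacation/002.jpg"))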

12:00pm

User CRUD operations (SCIM)
Using the Simple Cloud Identity Management (SCIM) standards for core user CRUD operations.

Wednesday October 5, 2011 12:00pm - 12:25pm
Salon A

12:00pm

EC2 Feature Review
Review EC2 features that are currently missing from Nova.

Wednesday October 5, 2011 12:00pm - 12:25pm
Hutchinson

12:00pm

reduce resource usage (fs layout and replication)
It would be better if object replication used fewer resources, and if we used fewer inodes in the system.

Wednesday October 5, 2011 12:00pm - 12:25pm
Wheeler

12:30pm

Lunch



Wednesday October 5, 2011 12:30pm - 1:30pm
3rd floor

1:30pm

Lightning Talks
Wednesday October 5, 2011 1:30pm - 1:55pm
3rd floor

2:00pm

Quantum 'Reference Implementation' plugin
The aim of this session is to discuss the design and implementation of a plugin aimed at becoming a port of the layer-2 part of Nova's VLAN network manager to Quantum. In other words, the plugin proposed with this blueprint will be functionally equivalent to the OVS plugin (https://blueprints.launchpad.net/quantum/+spec/quantum-openvswitch-plugin), but it will enable Quantum to be used over a wide range of hypervisors, including ESX and possibly Hyper-V.

Wednesday October 5, 2011 2:00pm - 2:25pm
Salon A

2:00pm

The future of nova-volume
This is a catch-all session to discuss where nova-volume needs to go to get it up to snuff. Some important pieces:
* integration of cleanup code for separation from compute
* special schedulers based on type
* zone support
* separate api endpoint?

Wednesday October 5, 2011 2:00pm - 2:55pm
Hutchinson

2:00pm

Deployment Fixtures (multi-node system deployment)
An API spec to describe the full deployment environment(s) for OpenStack components. We need a way for each OpenStack project to describe its deployment infrastructure in a consistent way across projects (REST API, configuration file, etc). The objective is to create a pattern that can be used to automatically create repeatable deployments. The pattern should be independent of the actual implementation mechanism, but should clearly indicate when changes require implementation changes. Being able to deploy components in a consistent way is essential for adoption.

Wednesday October 5, 2011 2:00pm - 2:55pm
Wheeler

2:00pm

Unconference
Wednesday October 5, 2011 2:00pm - 3:55pm
3rd floor

2:30pm

Git / Gerrit best practices
All of the OpenStack projects recently moved from bzr and Launchpad merge proposals to git and Gerrit. There are a bunch of really cool things this allows you to do, as well as things that are in the development pipeline. Let's get everyone up to speed on the new development process toolchain.

Wednesday October 5, 2011 2:30pm - 2:55pm
Salon A

3:00pm

Service identity and Flexible Keystone Roles
Right now Keystone has support for just two roles: Keystone Admin and Keystone Service Admin. What operations a role can perform is defined in the code. We propose to build a layer that would allow us to map operations to Keystone roles. This way we could provide a flexible way of tying roles to the operations that each role is allowed to perform within Keystone. Users of Keystone could then add their own Keystone-specific roles and also dictate what a role can do. Right now Keystone only has the concept of a user who can talk to Keystone. We also have a role called the service admin role, which a user could obtain and then perform a bunch of operations posing as a service that wants to communicate with Keystone. My proposal is to support individual applications talking to Keystone by providing a service ID exclusive to that service and service credentials (which could be a token). This way we also make a clear distinction between what a user is and what a service is. Every service that wants to talk to Keystone could register itself and get an application ID and credentials (someone like a Keystone admin does this). Every service could then be allowed to perform a set of operations based on the nature of the service, e.g. validating tokens, or CRUD on roles and endpoint templates specific to a service. The benefits are:
- Tracking: eventually we might need to track what calls Keystone gets; this would allow us to differentiate between services and users making calls.
- Limiting: we could set limits on the number of calls a service could make.
- Separation of concerns: we make a clear separation between users and services.

Wednesday October 5, 2011 3:00pm - 3:25pm
Salon A

3:00pm

Cluster as a Service: Dodai
Managing multiple clusters for OpenStack clouds, Hadoop, and other diverse frameworks. Clusters of commodity servers are used for a variety of distributed applications like simulation, data analysis, web services, and so on. No single framework can fit every distributed application. Users can get clusters just by specifying their configuration.

Wednesday October 5, 2011 3:00pm - 3:25pm
Wheeler

3:00pm

Notification and error handling (orchestration)
Nova currently ships error messages and notifications outside the system via a message queue. This has the downside that those errors are not available for diagnostics, and storing those errors breaks the simple model of Nova's architecture. The 1.1 API specification uses asynchronous notifications to access this data, but it is at best a mediocre solution. This session would be to brainstorm on various proposals for the 2.0 API and understand the best way to ensure that Nova users can track errors.

Wednesday October 5, 2011 3:00pm - 3:55pm
Hutchinson

3:30pm

Service Endpoint Template Location
Today, Keystone distinguishes service endpoints by service name/type and region. We would like to propose a change to the endpoint schema to allow further delineation by introducing a new complex type 'location' and attribute 'locationType'. The complex type describes the location of the endpoint and allows clients to choose services based on geography, region, and zone. This can be extremely useful when privacy laws might dictate which geography, and thus which service endpoint, you use when storing certain types of information. Additionally, allowing a client to distinguish down to the zone level can be beneficial in supporting client-controlled HA within a region. Location type would be a required attribute based on an enumeration that clients could use to help them understand how, and whether, to consume the optional elements of location. For example, if the location type is 'enterprise' then clients probably do not care about zone, region, and geography. However, if the location type were 'cloud', then a client would want to consume the location information to help draw further distinctions between service endpoints. Concepts:
- Zone (or availability zone): a self-documenting name that represents a physically distinct install location of OpenStack services within a single region. This might be separate buildings on a campus, separate rooms in a data center, etc.
- Region: a collection of zones
- Geo: a collection of regions
Example: Zone = z1, Region = West, Geography = US

Wednesday October 5, 2011 3:30pm - 3:55pm
Salon A
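
As a concrete illustration of the 'location'/'locationType' proposal above, here is a hypothetical endpoint entry and a trivial client-side filter; the field values come from the example in the abstract, but the exact schema is an assumption.

endpoint = {
    "name": "swift",
    "type": "object-store",
    "publicURL": "https://objects.example.com/v1",
    "locationType": "cloud",          # tells the client how to read 'location'
    "location": {"zone": "z1", "region": "West", "geo": "US"},
}

def in_geo(endpoints, geo):
    """Pick endpoints a privacy policy allows, e.g. only those hosted in 'US'."""
    return [e for e in endpoints
            if e.get("locationType") == "cloud" and e["location"]["geo"] == geo]

print(in_geo([endpoint], "US"))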

3:30pm

Quality Assurance in OpenStack
This is a session to have the members of the OpenStack QA team meet each other and discuss the main priority items that will be tackled in the Essex release series. http://launchpad.net/openstack-qa http://launchpad.net/~openstack-qa-team

Wednesday October 5, 2011 3:30pm - 3:55pm
Wheeler

4:00pm

Break



Wednesday October 5, 2011 4:00pm - 4:30pm
3rd floor

4:30pm

IPv6 call to arms
We (Cybera) have tried to deploy OpenStack on IPv6 with mixed results, see http://www.cybera.ca/tech-radar/using-openstack-with-ipv6. IPv6 support in OpenStack could use improvements in the code and documentation. At a minimum, the purpose of this session would be to discover the landscape of IPv6 usage in the OpenStack community. Who is running OpenStack on IPv6? How are they using it? What challenges have they faced? Ideally we would also brainstorm the way forward for IPv6 support. Identify the areas that need the most work. Discuss potential collaborations with people willing to contribute time to improving IPv6 code and documentation. Started an etherpad at http://etherpad.openstack.org/nBJazPhggD and already filled in some info to get the ball rolling when the session starts.

Wednesday October 5, 2011 4:30pm - 4:55pm
Hutchinson

4:30pm

Integrating Xen and OpenStack with Project Kronos
Xen.org is porting the Xen Cloud Platform toolstack (XAPI) to Debian and Ubuntu. This will make it so that the Xen hypervisor, management domain, XAPI, and Nova (and other OpenStack components) can be installed on the same host system. We'd love to raise awareness of this effort and to get some feedback on it from the OpenStack development community. For more details see: http://blog.xen.org/index.php/2011/07/22/project-kronos/

Wednesday October 5, 2011 4:30pm - 4:55pm
Wheeler

4:30pm

searchable API using keyword to get objects
* Content: basically, Swift has PUT/GET operations for objects; we use PUT/GET to store and retrieve an object, but there is no way to find an object that matches a keyword I want to search for. In Swift, to get an object we have to send the object's information (e.g. account/container/object info). But we could change the PUT operation for an object to work with a keyword, like this:
1. store the object
2. store the keyword and the account/container/object info matched with the object in a repository (a database or anything else)
When we search for an object, we use a keyword that matches that object: we retrieve the object's location from the repository using the keyword, get back the object's position, and then send the request for the object using that position, making Swift storage smarter. I think keyword search is currently a very weak point of Swift; if we add a search process to Swift, then Swift will be smarter and more intelligent than before.
* Construction:
- proxy layer
- storage layer
- pluggable meta-store function
- search indexing layer
- search engine layer
- Open API layer to deliver objects stored in Swift

Wednesday October 5, 2011 4:30pm - 4:55pm
Salon A
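
A toy sketch of the keyword index the searchable-API proposal above describes, with an in-memory dictionary standing in for the proposed pluggable meta store: keywords map to account/container/object locations, and a search returns the locations to fetch with an ordinary GET.

class KeywordIndex:
    """Maps keywords to Swift object locations (account/container/object)."""
    def __init__(self):
        self._index = {}

    def put(self, account, container, obj, keywords):
        location = "/%s/%s/%s" % (account, container, obj)
        for word in keywords:
            self._index.setdefault(word.lower(), set()).add(location)

    def search(self, keyword):
        return sorted(self._index.get(keyword.lower(), set()))

index = KeywordIndex()
index.put("AUTH_demo", "photos", "cat.jpg", ["cat", "pet"])
index.put("AUTH_demo", "photos", "dog.jpg", ["dog", "pet"])
print(index.search("pet"))   # locations to retrieve with ordinary GET requests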

4:30pm

Unconference
Wednesday October 5, 2011 4:30pm - 5:55pm
3rd floor

5:00pm

Increase Security (incl. new secure root wrapper)
In this session we'll discuss the potential threats to Nova internal security and their mitigation, including:
* Information disclosure: log files and configuration files contain clear-text passwords.
* Root escalation: in Diablo the direct use of sudo was abstracted to allow plugging in another root escalation wrapper. That leaves room for implementing a more secure root wrapper that would filter arguments more precisely.

Wednesday October 5, 2011 5:00pm - 5:25pm
Hutchinson
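
A minimal sketch of the "filter arguments more precisely" idea from the secure root wrapper session above. The filter table and commands are illustrative assumptions, and the sketch only shows the allow/deny decision, not how a real wrapper would execute the command.

import re

# Each entry: executable -> list of regexes, one per allowed argument position.
FILTERS = {
    "/sbin/ip": [re.compile(r"link|addr"), re.compile(r"show")],
    "/bin/kill": [re.compile(r"-9|-15"), re.compile(r"\d+")],
}

def allowed(command):
    """Return True only if every argument matches the per-position filter."""
    executable, args = command[0], command[1:]
    patterns = FILTERS.get(executable)
    if patterns is None or len(args) != len(patterns):
        return False
    return all(p.fullmatch(a) for p, a in zip(patterns, args))

print(allowed(["/sbin/ip", "link", "show"]))        # True
print(allowed(["/bin/kill", "-9", "$(reboot)"]))    # False: argument rejected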

5:00pm

Internal Service Communication
Discuss how services should use message queues internally to efficiently achieve their goals. Message formats, parallelization, documentation, and task granularity will be the main tenets of our presentation. Nova will be used as a case study.

Wednesday October 5, 2011 5:00pm - 5:25pm
Salon A

5:30pm

Discover Diablo Networking Modes
Quantum and Melange will be here soon, but until then, we have to get by with the existing networking modes. We'll discuss the plethora of networking options, how they can be hacked and tweaked, and how they are implemented. We might even discuss some other potential features that could be added.

Wednesday October 5, 2011 5:30pm - 5:55pm
Hutchinson
 