
An Open Letter to the OpenStack Foundation

I have recently started to regularly follow the mailing lists and the conversations are quite interesting.

It is quite evident that OpenStack is starting to go through growing pains, something that was also apparent at OpenStack Silicon Valley 2014, held yesterday.

OpenStack has grown from a small project – where most of the developers knew each other personally, knew each other's phone numbers, a good personal community – the way it should be. But like all good, successful things, it grows. This is what OpenStack looks like today.

Current OpenStack programs are listed below:

New capabilities under development for Juno and beyond:

These are the core components. I do not think I am exaggerating if I say that there are at least another 30 projects or GitHub repositories, either on the OpenStack repo or the Stackforge repo, and a good portion of them are actively being developed.

Ideally, each and every project should be independent of the others. This of course has its upsides, but problems as well.

Let’s take an example.

Most of the core components – Keystone, Neutron, Glance and so on – use a central database, typically MySQL. You could install a database server for each and every component, but that would probably be major overkill, so usually what happens is that you create a different schema on the same database server for each component. That server now becomes a shared dependency for all the components using it, and the more components use it, the more things need to be considered: it has to be made highly available, the MySQL queries of a single component should not have any major effect on any other component, and so on and so forth.
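To make the shared dependency concrete, here is a rough sketch of what the database settings might look like across three services. The hostname, credentials and schema names are illustrative assumptions, not taken from a real deployment:

```ini
# Hypothetical /etc/keystone/keystone.conf
[database]
connection = mysql://keystone:secret@db01.example.com/keystone

# Hypothetical /etc/glance/glance-api.conf
[database]
connection = mysql://glance:secret@db01.example.com/glance

# Hypothetical /etc/neutron/neutron.conf
[database]
connection = mysql://neutron:secret@db01.example.com/neutron
```

Three separate schemas, one database server – if db01 goes down, or one service's queries saturate it, every service pointing at it suffers together.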

Dependency Hell? Spaghetti?

So you would think that if most components have settled on the same database, then we should do the same for all of the components? That would be the logical assumption. No need to overcomplicate things.

Along comes Ceilometer. I will not go into the reasons why, but it uses MongoDB, not MySQL. That just created a complication: I now need to know how to manage another database – back it up, replicate it, and make it highly available – in addition to MySQL.

I have seen this many times; my organization is as guilty as any other of doing things like this. A development group starts working on a product and looks for the easiest way to deliver its solution. Sometimes a solution is chosen because it is a “hot technology”, and other times because the standard “just doesn’t cut it”.

(As a side note, a whole other discussion could be held here: is adhering to standards the right way, even if it is more difficult or takes longer, or should you get the job done in the best and fastest way possible? Ideally it should be a balance of the two – IMHO.)

Without clear direction from the product leadership, without a proper standard – things can go wrong very quickly.

That is how I came across a product (with about 30 components) that had 6 different database technologies:

  • Cassandra
  • Oracle
  • CouchDB
  • MySQL
  • MongoDB
  • Derby

That means:

  1. 6 different connection strings
  2. 6 different database clients
  3. 6 different backup procedures
  4. 6 different high-availability models

And so on and so on…

Is this a bad thing? It depends on who you ask. The people who developed it are happy – they got the job done, the product works, and they did not create a dependency.

For those who have to deploy, manage, troubleshoot and support this product – THIS IS A NIGHTMARE, for obvious reasons, some of them mentioned above.

I think the same is happening in OpenStack. The proliferation of projects, the number of developers and the sheer size of the solution are becoming very hard to manage. The goals of one project do not necessarily align with those of other projects, and cross-project collaboration is not at its best at the moment.

Randy Bias said yesterday in his talk at OpenStackSV (not an exact quote):

The problem  is that OpenStack does not currently have a unifying vision or product strategy. With the growth in programs, OpenStack’s more consistent mission starts to be degraded and less meaning around OpenStack occurs.

Each program team tends to have their own view of what OpenStack is. This does not mean there needs to be a dictator in OpenStack, but there does need to be product leadership.

(Source – vElemental)

Adrian Ionel also raised some thoughts yesterday on the subject.

The general developer does not care about things below or around the application like monitoring software, storage, network, or the hypervisor. They care about API quality and ease of use, feature velocity (not about OpenStack plumbing), and portability (devs want to write things once). Is OpenStack too intrinsic and poorly focused? Are infrastructure vendors moving focus away from critical areas?

In conclusion, he believes that there are some tangible things that can be done:
– Focus on the API (awesome, well documented, easy to use, consistent, backwards compatible)
– Invest in ease of use vs. flexible plumbing (there could be too many options)
– Don’t move up the stack – partner instead (LB, DBaaS, etc.)
– Reshape upstream engineering to foster open competition inside OpenStack projects (engineering comes up with solutions and competes while staying in the framework, and market forces choose) vs. central planning

(Source – vElemental)

I am not sure I agree with everything that Adrian said, especially not the part about competition. Yes, competition does produce better products in the end, but it also generates a lot of work (and lines of code) that will probably go down the drain. I do not think the OpenStack community is currently in a position to allow itself that luxury. Code reviews and code fixes are not being completed in the fastest time possible, and extra code in two competing projects for the same purpose will only make that worse.

One last thing. The OpenStack WTE (Win the Enterprise) initiative is something that should have been kicked off 2 years ago.

No matter how good the product is – if people cannot use it or it does not provide the functionality that people need, it will not be adopted.

There are basic things that the enterprise needs today – things it has been asking for for a number of years – and it can get them today in competing products.

I am not saying that OpenStack has to cater for every single kind of workload, for every application, for every single scenario – not at all. But the vision, the direction, has to be there. Lay down the basic foundation of what OpenStack is going after, when you think it will happen, and more importantly what will not be supported or worked on in the foreseeable future.

The feeling that I have (and I am not the only one) is that each project is doing what is best for them – and not necessarily what is best for OpenStack.

The Technical Committee’s charter:

The OpenStack Technical Committee provides technical leadership for OpenStack as a whole. Responsibilities include enforcing OpenStack ideals (such as Openness, Transparency, Commonality, Integration and Quality), deciding on issues that impact multiple programs, providing an ultimate appeals board for technical decisions and general oversight. It is a fully-elected Committee that represents the contributors to the project, and more details about membership and programs may be found in its charter.

I think this is going to be a hot discussion topic at the upcoming Summit. The important thing is that it should be an open discussion, taking into consideration not only the development part of the community but also the users and operators as well.

As always I would be happy to help out in any way I can.

The OpenStack Architecture Design Book Authors Speak

At the OpenStack Design Summit I asked the authors the same 5 questions, in order to get their thoughts and feelings on OpenStack, the community and the future.
  1. How many years have you been working with OpenStack?
  2. What is your favorite thing about OpenStack?
  3. What is that you dislike about OpenStack?
  4. If there was only one thing you could change/improve in OpenStack, what would it be?
  5. Where do you think OpenStack will be in 3 years' time?
Here are their responses.
Beth Cohen, Cloud Technology Strategist – Verizon
    1. 3 years.
    2. It is a strong community of companies and people who want to build the best cloud platform in the world.
    3. It is a bunch of petty developers sniping at each other from their little fiefdoms.
    4. Better integration of the parts.
    5. Everywhere!
Sean Winn, Cloud Services Network Engineer – CloudScaling
    1. 2 years.
    2. I love that OpenStack is an open-source, community-developed system which, when leveraged properly within an organization, can have tremendous impact on every aspect of how that company does business. The effects of OpenStack on business operational efficiency and agility are incredible to me.
    3. Lack of cohesiveness between projects is one of the biggest problems that I see facing OpenStack. Features are sometimes developed without consideration of other OpenStack projects' implementations of the same or similar features.
    4. More cooperative efforts between projects to develop features with parity.
    5. The most widely deployed data center and cloud solution.
Kenneth Hui, Business Development Manager, Cloud Solutions – EMC
    1. 2 years.
    2. The collaborative nature of the community.
    3. Lack of focus in terms of development. Too many people chasing the newest shiny thing.
    4. Better product management.
    5. Leading private cloud platform.
Nick Chase, Technical Marketing Manager - Mirantis
    1. 2 years.
    2. The "open" nature of OpenStack means that anybody can get involved, and anybody can make it do what they need it to, if they are willing to put in the work. The possibilities are endless, and I'm passionate about that.
    3. I'm sure there's much that I "dislike" exactly, though there are some things I wish worked better, or were easier to use. Deployment could be a little easier, of course.
    4. Public perception. :)
    5. Complete convergence so that hybrid and multi-cloud are not just normal but transparent.
Kevin Jackson, Principal Cloud Architect – Rackspace
    1. 3 years.
    2. The fact it's an open source, globally collaborated project that is the first choice when discussing cloud technologies that you can deploy yourself.
    3. Release cycle of 6 months with very little support at present to easily upgrade to match this cadence.
    4. Neutron/Networking - we need to quickly move on from the "Nova-network" vs "Networking" discussion ASAP.
    5. We'll see "OpenStack Compatible" stickers on hardware and software, showing ease of integration with the standard privately deployed cloud software.
Anthony Veiga, Senior Network Engineer - Comcast
    1. 2 years.
    2. The flexibility to plug in the parts I want and omit the parts I don't. Plus, it's open source so I can modify the parts I need (which my team has done a lot of).
    3. I dislike the primarily vendor-driven nature of its development. More users need to get involved, and the Foundation should recognize that coders aren't the only contributors.
    4. Add community processes for locking out intentional roadblocking.
    5. A multi-billion dollar per year industry.
Sean Collins, OpenStack Developer – Comcast
    1. 2 years.
    2. Being able to make design decisions that affect the entire company I work for.
    3. Gerrit, Nitpickers.
    4. Nitpickers.
    5. Probably where it is currently.
Vinny Valdez, Principal OpenStack Enterprise Architect - Red Hat
    1. 1 year.
    2. I particularly enjoy how expansive, dynamic and flexible all of the projects are, yet they all come together in unison.
    3. Many concepts sound great in theory but are not always proven or tested.
    4. Move everything to MongoDB.
    5. The de facto standard way to run applications.
Alexandra Settle, Technical Writer – Rackspace
    1. 1 year.
    2. The community involvement and dedication everyone has to the project.
    3. Unfortunately the documentation is not up to the greatest standard it could potentially be. This however is an ongoing project and I hope to see it through.
    4. Documentation.
    5. Hopefully still progressing. Lots of community based projects die once a 'bigger and better' project is introduced.
I would like to thank all the authors for an amazing week in San Jose, an amazing experience, and an amazing outcome.

M&M's, Snickers and Security in the Cloud

I cannot take credit for this one – I heard it at a very interesting talk by Adrian Cockcroft at the Speed and Scale Meetup last week in Herzeliya.

The analogy was a very simple one, but very much to the point, and I feel it is a great way to look at security in the cloud.

M&M's are the one thing that my kids always ask me to bring back for them when I go to the States, especially the ones without the peanuts.

What is great about M&M's? They have a hard shell that protects the great soft chocolate inside. The shell is not unbreakable, but it is hard enough to protect the great stuff on the inside.
But once the shell is broken, you have nothing but chocolate.

A Snickers bar, on the other hand, has nice soft chocolate on the outside, but inside there are many crunchy nuts, each of which is hard and does not need the chocolate to protect it, because they are hard enough to look after themselves.

OK, enough about chocolate – what the heck does this have to do with cloud and security?

Traditionally, we are used to having perimeter devices that protect everything behind them; within the perimeter we are good to go, and there is an elevated level of trust – just like the hard shell of an M&M and the soft chocolate inside.

I do not think this will suffice in a cloud environment, and I don't think you should settle for it either. Each of these methodologies has its advantages, but disadvantages as well.

In the cloud, you do have the option of using perimeter devices by creating VPCs with most of the providers today.

I think we should treat our cloud environment like a Snickers bar. The outside is always soft, vulnerable, untrustworthy. You will not know which instances/VMs are running on the same host as you. Do they have access to the network subnet you are using? So what protects us? Only ourselves – the hard nuts.

Each and every cloud instance should assume that the environment it lives in is hostile. It will be constantly under attack from the dark side of the force.

That is why each instance should be locked down, with its own security kept as tight as possible. This can be done in a number of ways, which could include:

  • A minimal operating system with no bloated software or unnecessary packages
  • Minimal privileges for the users running applications – everything should be access-controlled, with sudo for example, and SELinux as well
  • iptables on the instance – allowing only certain services to be open to external traffic
  • SSH key authentication to the instances – no passwords
  • Security group access – defining what traffic will be allowed within your cloud, between the instances.
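As a rough sketch of the iptables and SSH points above – the exposed ports (22 and 443) are my own assumption for a single web-serving instance, not a recommendation for every workload:

```shell
#!/bin/sh
# Hypothetical host-level lockdown for a single cloud instance.

# Default deny inbound, allow outbound; drop any forwarding.
iptables -F
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# Allow loopback and return traffic for connections we initiated.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Open only the services this instance actually runs.
iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # SSH (keys only)
iptables -A INPUT -p tcp --dport 443 -j ACCEPT   # the application

# And in /etc/ssh/sshd_config, disable password logins entirely:
#   PasswordAuthentication no
```

Everything not explicitly opened is dropped by the INPUT policy, so the instance stays a "hard nut" even if the surrounding network is hostile.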

I am always looking for simple ways to explain sometimes complex terminology or concepts to people – and I found this one to be highly useful.