Rally at the OpenStack Summit in Vancouver

The OpenStack Summit has just finished, and it’s time to summarize all the Rally-related events.

Using Rally for OpenStack certification at Scale!

It goes without saying that for every OpenStack cloud, from big to small, one of the most important things is to be 100% sure that everything works as expected BEFORE taking on production workloads.

To ensure that production workloads will be successful, we can do a few key things:

  1. Generate real load from “real” OpenStack users
  2. Collect and retain detailed execution data for examination and historical comparison
  3. Measure workload performance and failure rate against established SLAs to validate a deployment
  4. Visualize results in beautiful graphic detail

Rally can fully automate these steps for you and save dozens if not hundreds of hours.

(more…)


Rally v0.0.4 – What’s new?

Rally v0.0.4

Information

+------------------+-----------------+
| Commits          |       87        |
+------------------+-----------------+
| Bug fixes        |       21        |
+------------------+-----------------+
| Dev cycle        |     30 days     |
+------------------+-----------------+
| Release date     |   14/May/2015   |
+------------------+-----------------+

Details

This release contains new features, new benchmark plugins, bug fixes, and various code and API improvements.

New Features & API changes

  • Rally can now generate load with users that already exist. This means one can use Rally to benchmark OpenStack clouds that use LDAP, AD, or any other read-only Keystone backend where it is not possible to create any users. To do this, set up the “users” section of the deployment configuration of the ExistingCloud type (see the sketch after this list). This feature also makes it safer to run Rally against production clouds: when run from an isolated group of users, Rally won’t affect the rest of the cloud’s users if something goes wrong.
  • New decorator @osclients.Clients.register can add new OpenStack clients at runtime. The added client will be available from osclients.Clients at the module level and will be cached. Example:
       >>> from rally import osclients
       >>> @osclients.Clients.register("supernova")
       ... def another_nova_client(self):
       ...   from novaclient import client as nova
       ...   return nova.Client("2", auth_token=self.keystone().auth_token,
       ...                      **self._get_auth_info(password_key="key"))
       ...
       >>> clients = osclients.Clients.create_from_env()
       >>> clients.supernova().services.list()[:2]
        [<Service: ...>, <Service: ...>]
    
  • Assert methods are now available for scenarios and contexts. There is a new FunctionalMixin class that implements the basic unittest assert methods. The base.Context and base.Scenario classes inherit from this mixin, so it is now possible to use base.assertX() methods in scenarios and contexts.
  • Improved installation script. The installation script has been almost completely rewritten. After this change, it can be run by an unprivileged user, supports different database types, allows specifying a custom Python binary, always asks for confirmation before doing potentially dangerous actions, automatically installs the needed software if run as root, and automatically cleans up the virtualenv and/or the downloaded repository if interrupted.
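
For reference, here is a minimal sketch of what a deployment configuration with a pre-created “users” section can look like. The field names, the file name, and the rally deployment create invocation below are taken from the Rally samples as I recall them, so treat this as an illustration rather than the authoritative schema:

import json

# ExistingCloud deployment that points Rally at pre-created users instead of
# letting it create its own (useful for read-only backends like LDAP or AD).
existing_cloud = {
    "type": "ExistingCloud",
    "auth_url": "http://example.net:5000/v2.0/",
    "admin": {
        "username": "admin",
        "password": "admin_password",
        "tenant_name": "admin"
    },
    "users": [
        {"username": "rally-user-1", "password": "secret", "tenant_name": "rally-tenant"},
        {"username": "rally-user-2", "password": "secret", "tenant_name": "rally-tenant"}
    ]
}

with open("existing-users.json", "w") as f:
    json.dump(existing_cloud, f, indent=4)

# The file can then be registered with something like:
#   rally deployment create --file existing-users.json --name existing-users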

(more…)


Rally review dashboard

The review process with Gerrit works very nicely until you get more than 100 open patches on review; after that it becomes a really hard task.

Fortunately, Gerrit allows you to build custom dashboards that group changes by various criteria, such as patches that passed CI and don’t have a -1 in code review. Building the URL for such a custom dashboard by hand is quite hard, but it can be simplified a lot with gerrit-dash-creator. Roman Vasylets did a great job and created a Rally dashboard URL:

dashboard

Now all patches are grouped! Read on to see how the grouping works…

(more…)


Rally can now generate load with existing users!

Finally, I am happy to announce that the OpenStack Rally team, after more than a year of work, has finished support for benchmarking with already existing users in OpenStack. This is a crucial feature that simplifies the adoption of Rally in the enterprise world.

Why is it so important?

There are two very important use cases from the production world:

  1. It’s simpler to run Rally against a production cloud
    Rally can use existing users instead of creating its own, which is impossible with read-only Keystone backends like LDAP and AD.
  2. It’s safer to run Rally against a production cloud
    Rally can be run from an isolated group of users, and if something goes wrong it won’t affect the rest of the cloud’s users.

(more…)


Rally v0.0.3 – What’s new?

Rally v0.0.3

New Features & API changes

  • Add the ability to specify versions for clients in benchmark scenarios
    You can call self.clients("glance", "2") and get a client initialized for a specific API version.
  • Add API for tempest uninstall
    $ rally-manage tempest uninstall    # fully removes Tempest for the active deployment
  • Add a --uuids-only option to rally task list
    $ rally task list --uuids-only    # returns a list with only task UUIDs
  • Add endpoint support to --fromenv deployment creation
    $ rally deployment create --fromenv
    # recognizes the standard OS_ENDPOINT environment variable
  • Configure SSL per deployment
    SSL information is now deployment-specific rather than Rally-specific, and the corresponding rally.conf option is deprecated. Take a look at the sample (a small sketch follows this list).
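
As an illustration of “deployment-specific SSL information”, here is a minimal sketch of an ExistingCloud deployment that carries its own SSL settings. The https_insecure and https_cacert field names reflect the sample as I remember it, so double-check them against your Rally version:

import json

# ExistingCloud deployment carrying its own SSL settings instead of
# relying on a global option in rally.conf.
deployment = {
    "type": "ExistingCloud",
    "auth_url": "https://keystone.example.net:5000/v2.0/",
    "admin": {
        "username": "admin",
        "password": "admin_password",
        "tenant_name": "admin"
    },
    "https_insecure": False,
    "https_cacert": "/etc/ssl/certs/cloud-ca.pem"
}

print(json.dumps(deployment, indent=4))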

(more…)


The simplest way in python to mock open() during unit testing

I believe that most Python developers are familiar with the pretty mock framework. It’s one of the simplest ways to avoid unit testing the uninteresting parts of your code: things like the DB, files, APIs of other services, heavy libs like libvirt, and so on.

Unfortunately, there is one unclear place (at least for me): how to mock open() properly, especially when with open() is used in the code.

I spent a bit of time resolving this problem, and here is the code snippet:

import mock
import testtools


def some(file_path):
    # Code under test: uses open() as a context manager.
    with open(file_path) as f:
        return f.read()


class MyTestCase(testtools.TestCase):

    # Patch "open" in the module where some() is defined (this module).
    # create=True is required because open is a builtin and does not exist
    # as a module-level attribute before patching.
    @mock.patch("%s.open" % __name__, create=True)
    def test_some(self, mock_open):
        # Each call to open() returns the next pre-configured file handle.
        mock_open.side_effect = [
            mock.mock_open(read_data="Data1").return_value,
            mock.mock_open(read_data="Data2").return_value
        ]

        self.assertEqual("Data1", some("fileA"))
        mock_open.assert_called_once_with("fileA")
        mock_open.reset_mock()

        self.assertEqual("Data2", some("fileB"))
        mock_open.assert_called_once_with("fileB")


Rally “verify” as the control plane for Gabbi, Tempest & in-tree functional tests

It goes without saying that making OpenStack easily testable is crucial for the future of OpenStack adoption. Let’s see how the OpenStack testing process can be improved by a Rally functional testing control plane.

OpenStack architecture pros & cons in nutshell

OpenStack has a micro-services architecture. There is a bunch of projects, each project has a bunch of services, and these services collaborate to provide IaaS and PaaS. The micro-services approach is not a silver-bullet architecture that resolves all issues.

Benefits of the micro-services approach

  1. Isolation. Every part of the system (VMs, volumes, images, object storage) is a separate project with a separate API. So even if an implementation is bad, it can be rewritten without affecting other parts of the system.
  2. Scale. Projects are developed separately, which means separate teams (with their own experts and leads) work on separate projects.

Issues of the micro-services approach

  1. Common functionality. If you would like to add a new API method to all services, or to new…
  2. Deployment, configuration, and management. You need separate projects, such as Fuel, RDO, or Juju, just to install and manage it all.
  3. CI/CD. Testing requires a very smart CI/CD system that can pick the proper version of every project, configure all the projects properly, start all the services, and then run the tests.
  4. Testing. Every project requires tests, which brings big issues. Those issues, and how to mitigate them, are the topic of this blog post.

Why is it hard to test OpenStack?

(more…)


The simplest way to use OpenStack python clients

OpenStack is great, but like all young projects it has some UX issues. One of the most user-facing ones is the initialization of the OpenStack Python clients. Every OpenStack service (Nova, Cinder, Glance, and so on) has its own client, each initialized in its own way.

Since the beginning, Rally has had an internal class that unifies the initialization of all clients. Recently we merged a patch that allows initializing this class from environment variables, which makes it really simple to use the OpenStack Python clients. Take a look at the code snippet below:

boris@ubuntu:~$ . devstack/openrc admin admin

boris@ubuntu:~$ python
>>> from rally import osclients
>>> clients = osclients.Clients.create_from_env()
>>> clients.nova().flavors.list()
[<Flavor: m1.tiny>, <Flavor: m1.small>, <Flavor: m1.medium>, <Flavor: m1.large>, <Flavor: m1.nano>, <Flavor: m1.heat>, <Flavor: m1.xlarge>, <Flavor: m1.micro>]

(more…)


Rally Tricks: “Stop load before your OpenStack goes wrong”

Benchmarking pre-production and production OpenStack clouds is not a trivial task. On the one hand, it’s important to reach the OpenStack cloud’s limits; on the other hand, the cloud shouldn’t be damaged. Rally aims to make this task as simple as possible. Since the very beginning, Rally has been able to generate enough load for any OpenStack cloud. Generating too big a load was the major issue for production clouds, because Rally didn’t know how to stop the load until it was too late. Finally, I am happy to say that we have solved this issue.

With the new “stop on SLA failure” feature, things are much better. A small sketch of the idea follows.
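
Here is a minimal sketch: a task whose SLA section caps the allowed failure rate, so Rally can abort the load as soon as the SLA is broken. The scenario name, runner options, and SLA keys follow the Rally samples as I remember them, and the task would be started with something like rally task start --abort-on-sla-failure; double-check both against your Rally version:

import json

# Boot-and-delete benchmark that should stop as soon as any iteration fails.
task = {
    "NovaServers.boot_and_delete_server": [{
        "args": {
            "flavor": {"name": "m1.tiny"},
            "image": {"name": "cirros"}
        },
        "runner": {"type": "constant", "times": 100, "concurrency": 10},
        "sla": {"failure_rate": {"max": 0}}
    }]
}

with open("boot-and-delete.json", "w") as f:
    json.dump(task, f, indent=4)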

(more…)
