OpenStack: Fix Security Group Enforcement on Compute Nodes

I discovered that the instances in my home lab were not being protected by the Neutron security groups I’d set up for them – what followed was a week-long odyssey to track down the culprit. The iptables rules were being put in place, and every other aspect of Neutron networking was just fine. Making things more mysterious, a test deployment on my own desktop, with the host running Fedora 23 and the containers running CentOS 7, did not manifest the issue.

Security groups are applied to the tap interface closest to the virtual machine – the excellent “OpenStack Hackspace” tutorial has a handy diagram of this layout and is well worth a read.

It turns out there’s a pair of kernel tunables that govern whether or not iptables rules are applied to traffic on interfaces that are members of a bridge, as is the case here. It also turns out that Red Hat, for some reason, flipped the default value of these tunables to 0 after RHEL 7 was released.

These tunables, and their required values, are:

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

These need to be set on the outer host if you run your OpenStack cloud in containers; otherwise, set them directly on your compute nodes, since that is where security groups are enforced.
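
For the record, here’s roughly how I set these, running as root on the host in question (the file name under /etc/sysctl.d/ is arbitrary, and on kernels newer than RHEL 7’s you may need to load the br_netfilter module before the tunables exist):

$ sysctl -w net.bridge.bridge-nf-call-iptables=1
$ sysctl -w net.bridge.bridge-nf-call-ip6tables=1
$ printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\n' > /etc/sysctl.d/99-bridge-nf.conf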

Hope this saves someone a week of troubleshooting! 🙂

OpenStack: Dedicate Compute Hosts to Projects

Use case: We want to deploy instances on a particular set of compute hosts because of their specialized capabilities.

On the API server(s), ensure that the following filters are included in scheduler_default_filters: AggregateInstanceExtraSpecsFilter, AggregateMultiTenancyIsolation
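
In nova.conf that looks something like the line below – the rest of the filter list here is just an illustrative default set, so keep whatever filters you already have and append the two aggregate filters:

scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,AggregateInstanceExtraSpecsFilter,AggregateMultiTenancyIsolation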

Dedicate Compute Hosts To a Specific Project

Create a host aggregate:

$ nova aggregate-create DedicatedCompute
$ nova aggregate-add-host DedicatedCompute dedicated-compute-host1
$ nova aggregate-add-host DedicatedCompute dedicated-compute-host2
$ nova aggregate-add-host DedicatedCompute dedicated-compute-host3

Add a key=value pair to the host aggregate metadata – we’ll use this to match against later.

$ nova aggregate-set-metadata DedicatedCompute filter_tenant_id=<Tenant ID Here>
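
You can sanity-check that the hosts and metadata took with:

$ nova aggregate-details DedicatedCompute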

Here we’re using the AggregateMultiTenancyIsolation filter. If we stop here, only members of the specified tenant will be able to create instances on hosts in this aggregate – but those instances will also spawn on any other host that is either not in an aggregate or has no filter_tenant_id metadata set. We want to isolate these hosts to a specific project.

Isolate Hosts To a Specific Project

We do so by creating a flavor and setting specific metadata on it:

$ nova flavor-key m1.dedicated set aggregate_instance_extra_specs:filter_tenant_id=<Tenant ID Here>
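
If the m1.dedicated flavor doesn’t already exist, create it first – the ID, RAM, disk, and vCPU values below are arbitrary placeholders:

$ nova flavor-create m1.dedicated auto 4096 40 2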

Here we’re invoking the AggregateInstanceExtraSpecsFilter. Note a couple of things:

  1. We’re filtering on the filter_tenant_id= tag we applied to the host aggregate above. This is purely a convenience – we could have set any other arbitrary key=value pair in the host aggregate’s metadata and matched against that here. The distinction matters for understanding how the two filters work: they don’t work together, we just happen to be reusing the same tag.
  2. The format of the above is very important. If you specify this in any other form the ComputeCapabilitiesFilter will try to match the resultant tag to a host and fail to start an instance with that flavor. This can make troubleshooting interesting – I had to walk through the code path of the nova scheduler and the filters to find this out. Fun!
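
With both filters in place, members of the target project land on the dedicated hosts simply by booting with the flavor – for example (the image and network IDs here are placeholders):

$ nova boot --flavor m1.dedicated --image <Image Here> --nic net-id=<Net ID Here> dedicated-instance-1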

Isolate Storage To a Specific Project

In this project’s case we want a specific storage pool, itself dedicated to a specific set of hosts and disks, available for use by instances in this project – but not other projects. We have created a volume backend called ‘elasticsearch’ that points to this storage pool, and will now create a Cinder volume type that makes use of it.

$ cinder type-create dedicated
$ cinder type-key dedicated set volume_backend_name=dedicated
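
For the scheduler to place volumes correctly, the volume_backend_name key on the type has to match the volume_backend_name option set in the backend’s stanza in cinder.conf. A rough sketch of that stanza is below – the stanza name, driver, and pool settings are placeholders for whatever your actual backend uses:

[elasticsearch]
volume_backend_name = dedicated
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-elasticsearch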

Next, we ensure that all other projects will not be able to use this volume type by zeroing out their quota for it:

$ for project in `openstack project list -f value | awk '{print $1}'`; do cinder quota-update --volumes 0 --volume-type=dedicated $project; done

We then grant a quota for this specific volume type to our special project:

$ cinder quota-update --volumes 100 --volume-type dedicated <Tenant ID Here>
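
Members of the project can then carve volumes out of the dedicated pool by specifying the type – the size and name below are arbitrary:

$ cinder create --volume-type dedicated --display-name dedicated-vol-1 100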

References:

  • http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html
  • https://bugs.launchpad.net/openstack-manuals/+bug/1445285
  • http://www.hitchnyc.com/openstack-multi-tenant-isolation/

Importing an OpenStack VM into Amazon EC2

Some quick notes for those interested:

  • Install the EC2 API tools
  • Set Access and Secret key environment variables:
    • export AWS_ACCESS_KEY=xxxxxxxxxxxxxxx
    • export AWS_SECRET_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  • Set up an S3 bucket and secure it to taste – you’ll specify it in the import command later
  • If exporting from OpenStack, remove the cloud-init package
  • Note all prep considerations on http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/PreparingYourVirtualMachine.html
  • Export the VM image – I was using OpenStack (KVM) images in raw format, so they needed no initial conversion. VHD and VMDK are also supported
    • Docs: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ImportingYourVM.html
    • Command I used: ec2-import-instance ./my-awesome-image.img -f RAW -t c4.xlarge -a x86_64 -b my-awesome-bucket --region us-east-1 --availability-zone us-east-1a --subnet subnet-xxxxxx -o ${AWS_ACCESS_KEY} -w ${AWS_SECRET_KEY} -p Linux

You need to take the destination region into account – your subnet and S3 bucket must be available in the region and availability zone you specify above.

The import takes time after the upload is complete – watch progress with ec2-describe-conversion-tasks.
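
With the same AWS_ACCESS_KEY/AWS_SECRET_KEY variables exported, that’s simply:

$ ec2-describe-conversion-tasks --region us-east-1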