Quick Tip: Setting the Color Space Value in Wayland

Some televisions and monitors are limited to the “broadcast RGB” color range. This is a subset of an 8-bit range of levels from 0-255 – in this case, 16-235. You’ll find this referred to as 16:235 in some cases.
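To make the ranges concrete: a full-range value maps into the limited range as limited = 16 + full * 219 / 255, so full-range black (0) becomes 16 and full-range white (255) becomes 235. A quick illustration:

```shell
# Map a full-range (0-255) video level into the limited/broadcast range.
full_to_limited() {
    awk -v v="$1" 'BEGIN { printf "%d\n", 16 + v * 219 / 255 }'
}

full_to_limited 0     # full-range black -> 16
full_to_limited 255   # full-range white -> 235
```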

You can find a lot more on this here: http://kodi.wiki/view/Video_levels_and_color_space

If you’re using Xorg, this can be adjusted using xrandr with something along the lines of:

xrandr --output HDMI-0 --set output_csc tvrgb

for Radeon devices. For Intel cards, it will look like this:

xrandr --output HDMI-0 --set "Broadcast RGB" "Limited 16:235"
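If you script this, the two variants can be unified behind a small helper that picks the right arguments for whichever driver is loaded. This is a sketch: the sysfs driver lookup and the HDMI-0 connector name are assumptions to verify on your own system (`xrandr --query` lists your connectors).

```shell
# Sketch: return the xrandr arguments for the detected KMS driver.
# The /sys path and the HDMI-0 connector name are assumptions.
limited_rgb_args() {
    case "$1" in
        i915)          printf '%s\n' '--set "Broadcast RGB" "Limited 16:235"' ;;
        radeon|amdgpu) printf '%s\n' '--set output_csc tvrgb' ;;
        *)             echo "unknown driver: $1" >&2; return 1 ;;
    esac
}

drv=$(basename "$(readlink -f /sys/class/drm/card0/device/driver 2>/dev/null)")
if command -v xrandr >/dev/null 2>&1 && [ -n "$drv" ]; then
    # eval because the Intel property name contains a space
    eval xrandr --output HDMI-0 "$(limited_rgb_args "$drv")"
fi
```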

So – if you’ve noticed your colors are washed out and were wondering why, the above is a good starting point for you. But wait! There’s more.


Maybe you’re like me, and you switched to Wayland. You had solved the above problem in Xorg, but you can’t find a way to do the same thing with Wayland.

The short answer: the Wayland protocol itself doesn’t provide a facility for this; the developers are leaving it up to the compositor to manage[1], and this hasn’t been implemented yet[2].

The way to do this is with the ‘proptest’ utility, a test program that ships with libdrm’s source tree. You typically won’t find it packaged by distributions. This solution is hacky, but it gets the job done.

Build libdrm

Build the latest libdrm. At the time of this writing that’s 2.4.82. You’ll need the requisite build dependencies provided by your distribution.

Build it, don’t install it:

./configure && make

From here I fished ‘proptest’ out of ./tests/proptest/.libs/ and placed it in /usr/local/bin.


If you run the proptest command without arguments you’ll receive a list of connectors and their properties. Its usage text reads:

  /root/libdrm-2.4.82/tests/proptest/.libs/proptest [options]
  /root/libdrm-2.4.82/tests/proptest/.libs/proptest [options] [obj id] [obj type] [prop id] [value]

  -D DEVICE  use the given device
  -M MODULE  use the given driver

The first form just prints all the existing properties. The second one is
used to set the value of a specified property. The object type can be one of
the following strings:
  connector crtc

  proptest 7 connector 2 1
will set property 2 of connector 7 to 1

Among these properties will be the one controlling output colorspace: for Intel cards this is ‘Broadcast RGB’, and for Radeon it’s ‘output_csc’. Nouveau may or may not have an equivalent property – I don’t know. Sample output from my laptop below:

trying to open device 'i915'...done
Connector 48 (eDP-1)
    1 EDID:
        flags: immutable blob

    2 DPMS:
        flags: enum
        enums: On=0 Standby=1 Suspend=2 Off=3
        value: 0
    5 link-status:
        flags: enum
        enums: Good=0 Bad=1
        value: 0
    52 audio:
        flags: enum
        enums: force-dvi=18446744073709551614 off=18446744073709551615 auto=0 on=1
        value: 0
    53 Broadcast RGB:
        flags: enum
        enums: Automatic=0 Full=1 Limited 16:235=2
        value: 0
    54 scaling mode:
        flags: enum
        enums: None=0 Full=1 Center=2 Full aspect=3
        value: 3

In my case, property 53 is the ‘Broadcast RGB’ property. These numbers will vary on your own system.
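Since these IDs differ from machine to machine, a small awk helper can dig them out of the proptest listing instead of hardcoding them. A sketch, assuming the Intel ‘Broadcast RGB’ property name:

```shell
# Print "<connector id> <property id>" for a named property,
# given proptest's listing on stdin.
find_prop() {
    awk -v name="$1" '
        /^Connector/   { conn = $2 }
        $0 ~ name ":"  { print conn, $1; exit }
    '
}
```

Then something like `proptest -M i915 -D /dev/dri/card0 | find_prop 'Broadcast RGB'` yields the two numbers to feed back to proptest.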

Based on all of the above, you’d need to run:

proptest -M i915 -D /dev/dri/card0 48 connector 53 2


This doesn’t seem to take effect while Wayland is running; it has to be run beforehand.

Putting it all together

Since I use GNOME I made a copy of ‘gdm.service’ in /etc/systemd/system and made the following addition:

ExecStartPre=/usr/local/bin/proptest -M i915 -D /dev/dri/card0 48 connector 53 2

So that the relevant parts of the unit look like:

[Unit]
Description=GNOME Display Manager

# replaces the getty
# replaces plymouth-quit since it quits plymouth on its own

# Needs all the dependencies of the services it's replacing
# pulled from getty@.service and
# (except for plymouth-quit-wait.service since it waits until
# plymouth is quit, which we do)
After=rc-local.service plymouth-start.service systemd-user-sessions.service

# GDM takes responsibility for stopping plymouth, so if it fails
# for any reason, make sure plymouth still stops

[Service]
ExecStartPre=/usr/local/bin/proptest -M i915 -D /dev/dri/card0 48 connector 53 2


I disabled the gdm service with ‘systemctl disable gdm’ and used ‘systemctl enable gdm’ to enable my edited unit file – units in /etc/systemd/system override the other paths.
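As an aside, a lighter-weight alternative to copying the entire unit is a drop-in file, which overlays just the added directive onto the stock gdm.service. A sketch (the file name is arbitrary; requires a systemd recent enough to support drop-in directories):

```ini
# /etc/systemd/system/gdm.service.d/limited-rgb.conf
[Service]
ExecStartPre=/usr/local/bin/proptest -M i915 -D /dev/dri/card0 48 connector 53 2
```

Run `systemctl daemon-reload` afterwards for the drop-in to take effect.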

This did the trick for me. As you can see the process is pretty raw – you can tell some aspects of Wayland are not yet mature. I’m confident they will be soon enough, and this solved my last blocker to using it full time.

Hope this helps somebody out there.

Using Let’s Encrypt! with Kerio Operator

This assumes you internally maintain a certbot host which retrieves certificates, and then you fetch those certs to the frontend / application servers that need them. It is also assumed you have enabled SSH for your Kerio Operator install.

WARNING: This will update your kerio database directly. Do not attempt unless you understand the implications and have made a backup.

  1. mkdir -pv /var/etc/letsencrypt/live/
  2. EDITOR=vim crontab -e:
    0 0 1 * * /usr/bin/scp -o StrictHostKeyChecking=no -r -i /var/etc/letsencrypt/ssl-sync.pem root@certbot-host:/etc/dehydrated/certs/* /var/etc/letsencrypt/live/ && /bin/sh /var/etc/letsencrypt/update-kerio.sh
  3. /var/etc/letsencrypt/update-kerio.sh (the key and cert paths are placeholders – point them at the files fetched by the scp job above):
    key=/var/etc/letsencrypt/live/<domain>/privkey.pem
    cert=/var/etc/letsencrypt/live/<domain>/cert.pem
    key_contents=$(cat $key)$'\n'
    cert_contents=$(cat $cert)$'\n'
    query="insert into ssl_certs values (NULL, '$key_contents', '$cert_contents', NULL) returning SSL_CERTS_ID;"
    NEW_ID=$(echo "$query" | isql-fb -u sysdba -p masterkey /var/lib/firebird/2.0/data/kts.fdb | tail -n2 - | tr -d '[:space:]')
    query="update HTTP_SERVER set SSL_CERTS_ID=${NEW_ID} where SSL_CERTS_ID!=0;"
    echo "$query" | isql-fb -u sysdba -p masterkey /var/lib/firebird/2.0/data/kts.fdb

    This adds the letsencrypt cert to the database and sets it active. Note that you’ll have a new cert in the database with each run of the cron job, and you’ll eventually want to clean out old ones. Some work could be done to check that the certificate has changed before running the update-kerio.sh script.
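That check could be sketched along these lines (the stamp file path is an assumption): store a hash of the last imported certificate, and skip the database update when it hasn’t changed.

```shell
# Returns 0 (true) when the cert at $1 differs from the hash stored
# in the stamp file $2, or when no hash has been stored yet.
cert_changed() {
    new=$(sha256sum "$1" | awk '{print $1}')
    [ -f "$2" ] && [ "$new" = "$(cat "$2")" ] && return 1
    return 0
}

# In the cron job, wrap the database update:
#   cert=/var/etc/letsencrypt/live/<domain>/cert.pem
#   stamp=/var/etc/letsencrypt/last-imported.sha256
#   if cert_changed "$cert" "$stamp"; then
#       /bin/sh /var/etc/letsencrypt/update-kerio.sh &&
#           sha256sum "$cert" | awk '{print $1}' > "$stamp"
#   fi
```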

Lark: Gentoo in the Windows Subsystem for Linux


Microsoft’s recent introduction of the Subsystem for Linux (awkwardly called ‘Bash on Ubuntu on Windows’) had me intrigued from the day of its announcement.

Though it’s a transparent attempt to keep developers from leaving their Windows environments behind in a world now focused on development for UNIX-like platforms – and though I’m not particularly interested in supporting such an agenda per se – I find the notion of a new NT kernel subsystem capable of handling Linux syscalls exotic, so I had to subject it to some stress testing.

Test 1: Gentoo Stage3

The first test was simply to see whether or not I could unpack a Gentoo stage3 tarball and replace the Ubuntu rootfs with it. This took the following form:

  1. Download the latest stage3-nomultilib tarball within the default Ubuntu environment. I used wget to do this, and I was in root’s home directory after a ‘sudo su -‘ so that proper filesystem attributes and permissions would be applied to the extracted files when I performed the next step:
  2. Unpack it into a directory I named rootfs_gentoo
  3. Exit all open ‘Bash on Ubuntu on Windows’ shells
  4. Using Windows Explorer, cut the rootfs_gentoo folder (as mentioned in 1. this was in root’s home directory – in the Windows environment, this is located at \Users\<me>\AppData\Local\lxss\root) and paste it in \Users\<me>\AppData\Local\lxss\
  5. Rename the existing ‘rootfs’ containing the Ubuntu install downloaded by Microsoft
  6. Rename my new ‘rootfs_gentoo’ folder to ‘rootfs’

I also had to make sure the Linux subsystem opened a shell as root, as this new Gentoo environment had no users created just yet. This was accomplished with:

C:\> lxrun /setdefaultuser root

in an elevated command prompt window.

So far so good. I created a user for myself, added it to wheel, set a password, and ran the same command above to set the Linux environment to use this new user.

I also needed to manually update /etc/resolv.conf in order to perform DNS lookups.

Test 2: Rebuild @world

I wanted to quickly strain WSL’s capabilities. I made sure /etc/portage/make.conf was configured so that ~amd64 packages would be installed, and then:

# emerge -u gcc
# gcc-config x86_64-pc-linux-gnu-6.3.0
# . /etc/profile
# emerge -eav --keep-going @world

Impressively, around 200 packages were rebuilt from source using the latest GCC without an issue (for those unfamiliar with Gentoo, that means recompiling the entire installed system). Things are getting serious.

In an upcoming post I’ll discuss how I built on this basis to launch Gentoo’s OpenRC init system, and use that to run services like SSHD.

OpenStack: Fix Security Group Enforcement on Compute Nodes

I discovered the instances in my home lab were not being protected by the neutron security groups I’d set up for them – what followed was a week-long odyssey to discover the culprit. IPTables rules were being put in place, and every other aspect of Neutron networking was just fine. Making things more mysterious, a test deploy on my own desktop, with the host running Fedora 23 and the containers running CentOS 7, did not manifest the issue.

A handy diagram from the excellent “OpenStack Hackspace” tutorial illustrates where each virtual interface sits.

Security groups are applied to the tap interface closest to the virtual machine.

It turns out there’s a pair of kernel tunables that govern whether or not iptables rules are applied to interfaces that are members of a bridge, as in this instance. It also turns out that Red Hat, for some reason, switched the default values of these tunables to 0 after RHEL 7 was released.

These tunables, and their required values, are:

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

These need to be set on the outer host if you run your OpenStack cloud in containers; otherwise, set them on your compute nodes, as that’s where security groups are enforced.
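To apply and persist them, something like the following works on most systemd-era distributions (the conf file name is arbitrary, and br_netfilter must be loaded before the tunables exist):

```shell
# Persist the bridge-netfilter settings across reboots.
conf="${SYSCTL_D:-/etc/sysctl.d}/99-bridge-nf.conf"
mkdir -p "$(dirname "$conf")"
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Load the module and apply immediately (guarded: both can fail in
# environments without the bridge module, e.g. containers).
modprobe br_netfilter 2>/dev/null || true
sysctl -p "$conf" 2>/dev/null || true
```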

Hope this saves someone a week of troubleshooting! 🙂

OpenStack: Dedicate Compute Hosts to Projects

Use case: We want to deploy instances on a particular set of compute hosts because of their special or specialized capabilities.

On the API server(s), ensure that scheduler_default_filters includes the following: AggregateInstanceExtraSpecsFilter,AggregateMultiTenancyIsolation

Dedicate Compute Hosts To a Specific Project

Create a host aggregate:

$ nova aggregate-create DedicatedCompute
$ nova aggregate-add-host DedicatedCompute dedicated-compute-host1
$ nova aggregate-add-host DedicatedCompute dedicated-compute-host2
$ nova aggregate-add-host DedicatedCompute dedicated-compute-host3

Add a key=value pair to the host aggregate metadata – we’ll use this to match against later.

$ nova aggregate-set-metadata DedicatedCompute filter_tenant_id=<Tenant ID Here>

We are here using the AggregateMultiTenancyIsolation filter. If we stop here, only members of the specified tenant will be able to create instances on hosts in this aggregate – but instances will also spawn on any other host that is either not in an aggregate, or has no filter_tenant_id metadata set. We want to isolate these hosts to a specific project.

Isolate Hosts To a Specific Project

We do so by creating a flavor and giving it specific metadata:

$ nova flavor-key m1.dedicated set aggregate_instance_extra_specs:filter_tenant_id=<Tenant ID Here>

We are here invoking the AggregateInstanceExtraSpecsFilter filter. Note a couple of things:

  1. We’re filtering on the filter_tenant_id= tag we applied to the host aggregate above. This is a convenience – we could have set another arbitrary key=value pair in the host aggregate’s metadata and used that to match against here. This is conceptually important for the purpose of understanding how the two filters work – they don’t work together, we just happen to be using the same tags.
  2. The format of the above is very important. If you specify this in any other form the ComputeCapabilitiesFilter will try to match the resultant tag to a host and fail to start an instance with that flavor. This can make troubleshooting interesting – I had to walk through the code path of the nova scheduler and the filters to find this out. Fun!

Isolate Storage To a Specific Project

In this project’s case we want a specific storage pool, itself dedicated to a specific set of hosts and disks, available for use by instances in this project – but not other projects. We have created a volume backend called ‘elasticsearch’ that points to this storage pool, and will now create a Cinder volume type that makes use of it.

$ cinder type-create dedicated
$ cinder type-key dedicated set volume_backend_name=elasticsearch

We start by ensuring that all other projects will not be able to use this volume type:

$ for project in `openstack project list -f value | awk '{print $1}'`; do cinder quota-update --volumes 0 --volume-type=dedicated $project; done

We then grant a quota for this specific volume type to our special project:

$ cinder quota-update --volumes 100 --volume-type dedicated <Tenant ID Here>


  • http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html
  • https://bugs.launchpad.net/openstack-manuals/+bug/1445285
  • http://www.hitchnyc.com/openstack-multi-tenant-isolation/

Importing an OpenStack VM into Amazon EC2

Some quick notes for those interested:

  • Install the EC2 API tools
  • Set Access and Secret key environment variables:
    • export AWS_ACCESS_KEY=xxxxxxxxxxxxxxx
    • export AWS_SECRET_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  • Set up an S3 bucket and secure it to taste, to be specified later
  • If exporting from OpenStack, remove the cloud-init package
  • Note all prep considerations on http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/PreparingYourVirtualMachine.html
  • Export the VM image – I’m using OpenStack (KVM) images in raw format, so these needed no initial conversion. You can also use VHD or VMDK
    • Docs: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ImportingYourVM.html
    • Command I used: ec2-import-instance ./my-awesome-image.img -f RAW -t c4.xlarge -a x86_64 -b my-awesome-bucket --region us-east-1 --availability-zone us-east-1a --subnet subnet-xxxxxx -o ${AWS_ACCESS_KEY} -w ${AWS_SECRET_KEY} -p Linux

You need to take the destination region into account – your subnet and S3 bucket must be available in the region/availability zone you specify above.

The import takes time after the upload is complete – watch progress with ec2-describe-conversion-tasks.

Privacy Helper for Windows 10

During my evaluation of Windows 10 I’ve cobbled together a script that disables most known anti-privacy features in the new system. It also removes the unnecessarily installed default Modern apps. The script was designed with a professional environment in mind, but applies equally well to home users, and most settings will apply to all editions of Windows, not just Pro/Enterprise.

It’s alarming but not surprising that we’ve gotten to this point – Windows has always served the agenda of its maker before that of its user. This edition ships with a significant set of end user facing UI improvements, but its goals are not different. They simply take advantage of the current state of the art and the current anti-privacy climate.

The script can’t be considered complete, as not all anti-privacy features are documented by Microsoft, and future updates will doubtless add more. A list of changes follows – comment out the relevant section of the script if one of these is undesired:

  • Stops / Disables intrusive diagnostics services
  • Removes diagnostic scheduled tasks
  • Removes OneDrive
  • Removes access to the Windows Store
  • Removes access to Cortana
  • Blocks connection to Microsoft Accounts
  • Disables WiFi Sense (what’s this?)
  • Disables Windows Update peer to peer (what’s this?)
  • Requires Ctrl-Alt-Del to log on
  • Blocks “Add features to Windows 10”
  • Removes unnecessary apps (they can be reinstalled from the Store if desired, assuming you’ve left it enabled)


Standalone – an extension for Mozilla Firefox

Found this little gem today, thought I’d mention it on my corner of the web.


It creates site-specific apps, like Prism used to do – for Chrome users, this is the equivalent of “Create Application Shortcuts”, which I personally find indispensable.

The strange thing is, it’s been around since 2013. How did I miss it for that long? I’ve been looking everywhere for something like this.

Thoughts on Docker

I like the concept of Docker and containerization in general, but I have some pretty fundamental concerns:

Thought experiments:

  • How many deployed docker images were torn down and redeployed upon the revelation of heartbleed? Of shellshock? In practice, not in theory.
  • How many Docker images are regularly destroyed and redeployed for the purpose of updating their userlands? Again, in reality, even with the most agile orchestration.
  • How many Docker images are actually deployed with a minimal attack surface, that being only the executables and libraries they need, rather than entire userlands?
  • How many Docker images are given to IT/Ops as single filesystem images rather than multi-gigabyte change layers, contributing heavily to wasted storage space?
  • How can Docker images composed of random people’s aging Linux userlands ever be taken seriously in an environment that needs to be kept certified, stable and secure?
  • What is the benefit of Docker given the above, when LXC and libvirt-lxc perform the same containerization, provide Ops with much greater flexibility in terms of orchestration and change management, and have for years?
  • Dan Walsh of Red Hat has much to say about the security of Docker and LXC containers – the most important statement he makes is that “containers don’t contain” – containers provide no security, they are only useful for the purpose of deploying applications in a manageable way. Given this, is it responsible to use containers based on full Linux filesystems? If you do, you’d better be ready to tear down your ENTIRE stack each and every time a major vulnerability comes to light.

Points worth pondering – these affect the future direction of container technology and shed light on the implications.

Running Systemd within a Docker image

NOTE: This is not for general purpose use – CAP_SYS_ADMIN grants the container a large number of dangerous privileges – this should be used only by a sysadmin (Ops) – not a developer – for the purpose of containerizing infrastructure.

Some of my insights gained on running systemd within docker – I’m aware the general idea is to run a single process, but that’s for developers. I’m a sysadmin, so I know that underlying “docker” as a management system is a full featured Linux namespace/cgroup facility allowing me to run a fully containerized Linux userland.

This is cobbled together from some of the example Dockerfiles provided by Red Hat’s Project Atomic as well as personal research into systemd, unprivileged container limitations, and the Linux capability system. This isn’t guaranteed to work on anything but the latest version of Docker at the time of this post – 1.4.1. It may work on a previous version if it has support for --cap-add.

FROM centos:centos7
MAINTAINER "Brad Laue" <brad@brad-x.com>
RUN yum -y update; yum clean all; \
yum -y swap -- remove fakesystemd -- install systemd systemd-libs; \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
RUN systemctl mask getty.target swap.target
RUN systemctl disable graphical.target; systemctl enable multi-user.target
VOLUME ["/sys/fs/cgroup"]
VOLUME ["/run"]
VOLUME ["/export"]
ENV container=lxc
CMD ["/usr/lib/systemd/systemd"]

Build with:

docker build --rm -t bradlaue/centos-systemd .

Run with:

docker run -ti --cap-add=SYS_ADMIN -e container=lxc bradlaue/centos-systemd

Note this does not require a fully privileged container instance as many seem to indicate – systemd requires only CAP_SYS_ADMIN in order to avoid a segfault.

From here you can build out a container by installing/running standard Linux services in the normal systemd way, including process monitoring.