Windows Subsystem for Linux: Installing Arbitrary Distributions

As of Windows 10 build 16299.19, also known as the “Fall Creators Update”, Linux distributions are now available in the Windows Store, and multiple distributions can be installed and run alongside one another.

This is great news, but the old mechanism offered one feature the latest release makes a bit more fragile: the ability to install a custom distribution of one’s own choosing.

Various ways of doing this with the Fall Creators Update have been published, the most common of which is to install one of the distributions from the Windows Store and replace its rootfs directory with one of your own. I wanted a more elegant solution. Of particular interest to me are Gentoo and Fedora – note that Fedora will eventually be in the Windows Store officially, but I didn’t want to wait.

The Registry

Linux distributions are now registered in the Windows registry on a per-user basis. This is located at:

\HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Lxss\

You can register your own distribution by creating a .reg file containing the following:

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Lxss\{6be8e482-66f8-4475-b570-aa8c2fd35809}]
"State"=dword:00000001
"DistributionName"="Fedora"
"Version"=dword:00000001
"BasePath"="C:\\Linux\\Fedora"
"KernelCommandLine"="BOOT_IMAGE=/kernel init=/init ro"
"DefaultUid"=dword:000003e8
"Flags"=dword:00000007
"DefaultEnvironment"=hex(7):48,00,4f,00,53,00,54,00,54,00,59,00,50,00,45,00,3d,\
 00,78,00,38,00,36,00,5f,00,36,00,34,00,00,00,4c,00,41,00,4e,00,47,00,3d,00,\
 65,00,6e,00,5f,00,55,00,53,00,2e,00,55,00,54,00,46,00,2d,00,38,00,00,00,50,\
 00,41,00,54,00,48,00,3d,00,2f,00,75,00,73,00,72,00,2f,00,6c,00,6f,00,63,00,\
 61,00,6c,00,2f,00,73,00,62,00,69,00,6e,00,3a,00,2f,00,75,00,73,00,72,00,2f,\
 00,6c,00,6f,00,63,00,61,00,6c,00,2f,00,62,00,69,00,6e,00,3a,00,2f,00,75,00,\
 73,00,72,00,2f,00,73,00,62,00,69,00,6e,00,3a,00,2f,00,75,00,73,00,72,00,2f,\
 00,62,00,69,00,6e,00,3a,00,2f,00,73,00,62,00,69,00,6e,00,3a,00,2f,00,62,00,\
 69,00,6e,00,3a,00,2f,00,75,00,73,00,72,00,2f,00,67,00,61,00,6d,00,65,00,73,\
 00,3a,00,2f,00,75,00,73,00,72,00,2f,00,6c,00,6f,00,63,00,61,00,6c,00,2f,00,\
 67,00,61,00,6d,00,65,00,73,00,00,00,54,00,45,00,52,00,4d,00,3d,00,78,00,74,\
 00,65,00,72,00,6d,00,2d,00,32,00,35,00,36,00,63,00,6f,00,6c,00,6f,00,72,00,\
 00,00,00,00

Take note of a few things here:

  • The name of the parent key must always be a unique UUID
  • DefaultUid is a hexadecimal representation of the UNIX UID that the subsystem will launch with (000003e8 is 1000 in decimal) – make sure a user with that UID exists in your custom Linux environment beforehand.
    • Also note that this user should be able to become root. Add the user to the wheel group and/or preinstall ‘sudo’.
  • BasePath is the location of this Linux environment on the host filesystem. It can be placed anywhere, as long as it’s specified in this key.
  • Flags set the way it is here appears to indicate that the WSL environment has been installed and doesn’t require further bootstrapping. Documentation on this is limited; insights welcome.

Import this .reg file into the registry after customizing it appropriately.
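If you need a fresh UUID for the key name, PowerShell will generate one, and reg.exe will handle the import – a quick sketch, assuming you saved the file as C:\Linux\fedora.reg (an illustrative path):

C:\>powershell -Command "[guid]::NewGuid().ToString('B')"
C:\>reg import C:\Linux\fedora.reg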

Preparing the Linux filesystem

You’ll need to install one of the Linux distributions from the Windows Store in order to get started. It doesn’t matter which one you choose. Once that’s installed (I won’t cover that setup here, as it’s documented thoroughly elsewhere), become root, and fetch a tarball containing a filesystem for your distribution of choice. In my example we’ll use a Fedora docker image, obtained from Fedora’s Koji build system.

Make a new directory and cd into it. Extract the tarball:

root ~ # mkdir fedora
root ~ # cd fedora
root ~/fedora # tar Jxf /mnt/c/Users/brad/Downloads/Fedora-Docker-Base-27-20171105.n.0.x86_64.tar.xz

This Fedora tarball contains another tarball within it. Extracting it creates a directory with a long hexadecimal name – tab completion is your friend here. Under that directory, the second tarball, layer.tar, contains the actual Linux filesystem. Note that this situation is unique to Fedora’s Koji tarballs – others, such as Gentoo stage3, extract directly to filesystem contents. Extract the inner tarball:

root ~/fedora # cd 5a666bb24354de5795820e875e208518e8ea0a249e57929639cd2c1797e447e9
root ~/fedora/5a666bb24354de5795820e875e208518e8ea0a249e57929639cd2c1797e447e9 # tar xf layer.tar

Since this Fedora image is intended to be used in a docker container, it lacks a few things we’ll need. We’ll want to copy the resolv.conf file from the parent Linux environment into it:

root ~/fedora/5a666bb24354de5795820e875e208518e8ea0a249e57929639cd2c1797e447e9 # cp -v /etc/resolv.conf etc/

Then we’ll want to chroot into the new Fedora filesystem to be able to work on it further:

root ~/fedora/5a666bb24354de5795820e875e208518e8ea0a249e57929639cd2c1797e447e9 # chroot .

Install the passwd package:

[root@Odyssey /]# dnf -y install passwd

Set a password for root:

[root@Odyssey /]# passwd
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Add a user for yourself. Note that I’m specifying a UNIX UID matching the one expected by the registry entry above, and making the user a member of the wheel group so it’s permitted to become root and to use sudo:

[root@Odyssey /]# useradd -u 1000 -m brad
[root@Odyssey /]# passwd brad
Changing password for user brad.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@Odyssey /]# usermod -a -G wheel brad
[root@Odyssey /]# exit
root ~/fedora/5a666bb24354de5795820e875e208518e8ea0a249e57929639cd2c1797e447e9 #

Exit the chroot as depicted in the last two lines above. Recall that in the registry example above I specified C:\Linux\Fedora as the location for this Linux environment. We’ll need to move it there. Close all Linux environment windows first, or you’ll have trouble with the following steps.

Find the location of the Linux environment you installed from the store:

C:\Users\brad>powershell Get-ChildItem HKCU:\Software\Microsoft\Windows\CurrentVersion\Lxss

Hive: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Lxss

[ snip ]
{...32df2d9}   DistributionName   : openSUSE-42
               Version            : 1
               BasePath           : C:\Users\brad\AppData\Local\Packages\46932SUSE.openSUSELeap42.2_022rs5jcyhyac\LocalState
               PackageFamilyName  : 46932SUSE.openSUSELeap42.2_022rs5jcyhyac
               KernelCommandLine  : BOOT_IMAGE=/kernel init=/init ro
               DefaultUid         : 1000
               Flags              : 7
               DefaultEnvironment : {HOSTTYPE=x86_64, LANG=en_US.UTF-8,
                                     PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games,
                                     TERM=xterm-256color}

Navigate to the directories we’ve been working in – these are under the BasePath corresponding to the distribution you installed, which in my case is:

C:\Users\brad\AppData\Local\Packages\46932SUSE.openSUSELeap42.2_022rs5jcyhyac\LocalState\rootfs\root\fedora

Using Explorer, cut (DO NOT copy!) the subdirectory you’ve been working in. Paste it into C:\Linux\Fedora (the BasePath of the Linux environment we defined ourselves) as rootfs. The resulting directory structure should be:

C:\Linux\Fedora\rootfs

Final Steps

We’ve now defined the custom WSL environment in the registry and preconfigured the Linux filesystem we’ll be using. Optionally, you can now set this WSL environment as default:

C:\>wslconfig /s Fedora

And enter it:

C:\Users\brad>wsl
[brad@Odyssey brad]$

You can also enter a WSL environment by specifying its UUID as you set it in the registry:

C:\Users\brad>wsl {6be8e482-66f8-4475-b570-aa8c2fd35809}
[brad@Odyssey brad]$

Have fun with your new Linux subsystem environment!

Quick Tip: Setting the Color Space Value in Wayland

Some televisions and monitors are limited to the “broadcast RGB” color range, a subset of the full 8-bit range of levels from 0-255 – in this case, 16-235. You’ll sometimes see this referred to as 16:235.

You can find a lot more on this here: http://kodi.wiki/view/Video_levels_and_color_space

If you’re using Xorg, this can be adjusted using the xrandr utility with something along the lines of:

xrandr --output HDMI-0 --set output_csc tvrgb

for Radeon devices. For Intel cards, it will look like this:

xrandr --output HDMI-0 --set "Broadcast RGB" "Limited 16:235"

So – if you’ve noticed your colors are washed out and were wondering why, the above is a good starting point for you. But wait! There’s more.

Wayland

Maybe you’re like me, and you switched to Wayland. You had solved the above problem in Xorg, but you can’t find a way to do the same thing with Wayland.

The short answer: the Wayland protocol itself doesn’t provide a facility for this; the developers are leaving it up to the compositor to manage[1], and this hasn’t been implemented yet[2].

The way to do this is with the ‘proptest’ utility, a test tool that ships with libdrm. You typically won’t find it packaged by distributions. This solution is hacky, but it gets the job done.

Build libdrm

Build the latest libdrm. At the time of this writing that’s 2.4.82. You’ll need requisite build dependencies provided by your distribution.

Build it, don’t install it:

./configure && make

From here I fished ‘proptest’ out of ./tests/proptest/.libs/ and placed it in /usr/local/bin.
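For reference, the whole fetch-and-build amounts to roughly the following – the tarball URL follows the conventional freedesktop.org layout, so adjust the version for newer releases:

wget https://dri.freedesktop.org/libdrm/libdrm-2.4.82.tar.bz2
tar xf libdrm-2.4.82.tar.bz2
cd libdrm-2.4.82
./configure && make
cp tests/proptest/.libs/proptest /usr/local/bin/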

Proptest

If you run the proptest command without arguments you’ll receive a list of connectors and properties.

Usage:
  /root/libdrm-2.4.82/tests/proptest/.libs/proptest [options]
  /root/libdrm-2.4.82/tests/proptest/.libs/proptest [options] [obj id] [obj type] [prop id] [value]

options:
  -D DEVICE  use the given device
  -M MODULE  use the given driver

The first form just prints all the existing properties. The second one is
used to set the value of a specified property. The object type can be one of
the following strings:
  connector crtc

Example:
  proptest 7 connector 2 1
will set property 2 of connector 7 to 1

Among these properties will be the specific one controlling output colorspace. For Intel cards this will be ‘Broadcast RGB’, and for Radeon it will be ‘output_csc’. I don’t know whether Nouveau exposes an equivalent property. Sample output from my laptop below:

trying to open device 'i915'...done
Connector 48 (eDP-1)
    1 EDID:
        flags: immutable blob
        blobs:

        value:
            00ffffffffffff004d103e1400000000
            28190104a52313780effb3a85334b825
            0c4d5500000001010101010101010101
            0101010101014dd000a0f0703e803020
            35005ac2100000180000000000000000
            00000000000000000000000000fe0037
            50485054824c51313536443100000000
            0002410328001200000b010a20200019
    2 DPMS:
        flags: enum
        enums: On=0 Standby=1 Suspend=2 Off=3
        value: 0
    5 link-status:
        flags: enum
        enums: Good=0 Bad=1
        value: 0
    52 audio:
        flags: enum
        enums: force-dvi=18446744073709551614 off=18446744073709551615 auto=0 on=1
        value: 0
    53 Broadcast RGB:
        flags: enum
        enums: Automatic=0 Full=1 Limited 16:235=2
        value: 0
    54 scaling mode:
        flags: enum
        enums: None=0 Full=1 Center=2 Full aspect=3
        value: 3

In my case, property 53 is the ‘Broadcast RGB’ property. These numbers will vary on your own system.

Based on all of the above, you’d need to run:

proptest -M i915 -D /dev/dri/card0 48 connector 53 2

Caveat

This doesn’t seem to take effect while Wayland is running; it has to be run beforehand.

Putting it all together

Since I use GNOME I made a copy of ‘gdm.service’ in /etc/systemd/system and made the following addition:

ExecStartPre=/usr/local/bin/proptest -M i915 -D /dev/dri/card0 48 connector 53 2

So that the entire unit looks like:

[Unit]
Description=GNOME Display Manager

# replaces the getty
Conflicts=getty@tty7.service
After=getty@tty7.service

# replaces plymouth-quit since it quits plymouth on its own
Conflicts=plymouth-quit.service
After=plymouth-quit.service

# Needs all the dependencies of the services it's replacing,
# pulled from getty@.service and plymouth-quit.service
# (except for plymouth-quit-wait.service since it waits until
# plymouth is quit, which we do)
After=rc-local.service plymouth-start.service systemd-user-sessions.service

# GDM takes responsibility for stopping plymouth, so if it fails
# for any reason, make sure plymouth still stops
OnFailure=plymouth-quit.service

[Service]
ExecStartPre=/usr/local/bin/proptest -M i915 -D /dev/dri/card0 48 connector 53 2
ExecStart=/usr/sbin/gdm
KillMode=mixed
Restart=always
IgnoreSIGPIPE=no
BusName=org.gnome.DisplayManager
StandardOutput=syslog
StandardError=inherit
EnvironmentFile=-/etc/locale.conf

[Install]
Alias=display-manager.service

I disabled the stock unit with ‘systemctl disable gdm’, then ran ‘systemctl enable gdm’ again to enable my edited unit file – units in /etc/systemd/system take precedence over those shipped in the other unit paths.

This did the trick for me. As you can see, the process is pretty raw – some aspects of Wayland are not yet mature. I’m confident they will mature soon enough; this solved my last blocker to using it full time.

Hope this helps somebody out there.

Using Let’s Encrypt with Kerio Operator

This assumes you internally maintain a certbot host which retrieves certificates, and then you fetch those certs to the frontend / application servers that need them. It is also assumed you have enabled SSH for your Kerio Operator install.

WARNING: This will update your kerio database directly. Do not attempt unless you understand the implications and have made a backup.

  1. mkdir -pv /var/etc/letsencrypt/live/
  2. EDITOR=vim crontab -e:
    0 0 1 * * /usr/bin/scp -o StrictHostKeyChecking=no -r -i /var/etc/letsencrypt/ssl-sync.pem root@certbot-host:/etc/dehydrated/certs/* /var/etc/letsencrypt/live/ && /bin/sh /var/etc/letsencrypt/update-kerio.sh
  3. /var/etc/letsencrypt/update-kerio.sh:

     #!/bin/bash

     key=/var/etc/letsencrypt/live/talk.brad-x.com/privkey.pem
     cert=/var/etc/letsencrypt/live/talk.brad-x.com/fullchain.pem

     key_contents=$(cat $key)$'\n'
     cert_contents=$(cat $cert)$'\n'

     query="insert into ssl_certs values (NULL, '$key_contents', '$cert_contents', NULL) returning SSL_CERTS_ID;"
     NEW_ID=$(echo "$query" | isql-fb -u sysdba -p masterkey /var/lib/firebird/2.0/data/kts.fdb | tail -n2 - | tr -d '[:space:]')

     query="update HTTP_SERVER set SSL_CERTS_ID=${NEW_ID} where SSL_CERTS_ID!=0;"
     echo "$query" | isql-fb -u sysdba -p masterkey /var/lib/firebird/2.0/data/kts.fdb

     /opt/kerio/operator/bin/regenerateConfiguration

    This adds the letsencrypt cert to the database and sets it active. Note that you’ll have a new cert in the database with each run of the cron job, and you’ll eventually want to clean out old ones. Some work could be done to check that the certificate has changed before running the update-kerio.sh script.
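One way to do the latter is to cache the last-deployed certificate and bail out early when nothing has changed – a minimal sketch, using a cache path I’ve invented for illustration:

#!/bin/bash
# Skip the database update when the fetched certificate hasn't changed.
cert=/var/etc/letsencrypt/live/talk.brad-x.com/fullchain.pem
cache=/var/etc/letsencrypt/last-fullchain.pem   # hypothetical cache location

if ! cmp -s "$cert" "$cache"; then
    cp "$cert" "$cache"
    /bin/sh /var/etc/letsencrypt/update-kerio.sh
fi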

Lark: Gentoo in the Windows Subsystem for Linux

Why?

Microsoft’s recent introduction of the Windows Subsystem for Linux (awkwardly branded ‘Bash on Ubuntu on Windows’) had me intrigued from the day of its announcement.

Though it’s a transparent attempt to keep developers from leaving their Windows environments behind in a world now focused on development for UNIX-like platforms, and though I’m not particularly interested in supporting such an agenda per se, I find the notion of a new NT kernel subsystem capable of handling Linux syscalls exotic, so I had to subject it to some stress testing.

Test 1: Gentoo Stage3

The first test was simply to see whether or not I could unpack a Gentoo stage3 tarball and replace the Ubuntu rootfs with it. This took the following form:

  1. Download the latest stage3-nomultilib tarball within the default Ubuntu environment (steps 1 and 2 are sketched in shell form after this list). I used wget to do this, and I was in root’s home directory after a ‘sudo su -‘ so that proper filesystem attributes and permissions would be applied to the extracted files when I performed the next step:
  2. Unpack it into a directory I named rootfs_gentoo
  3. Exit all open ‘Bash on Ubuntu on Windows’ shells
  4. Using Windows Explorer, cut the rootfs_gentoo folder (as mentioned in step 1, this was in root’s home directory – in the Windows environment, this is located at \Users\<me>\AppData\Local\lxss\root) and paste it in \Users\<me>\AppData\Local\lxss\
  5. Rename the existing ‘rootfs’ containing the Ubuntu install downloaded by Microsoft
  6. Rename my new ‘rootfs_gentoo’ folder to ‘rootfs’
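In shell terms, steps 1 and 2 looked roughly like this – the tarball name changes with every release, so treat the filename as a placeholder:

$ sudo su -
# wget http://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3-amd64-nomultilib/stage3-amd64-nomultilib-<date>.tar.bz2
# mkdir rootfs_gentoo
# tar xpf stage3-amd64-nomultilib-<date>.tar.bz2 -C rootfs_gentoo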

I also had to make sure the Linux subsystem opened a shell as root, as this new Gentoo environment had no users created just yet. This was accomplished with:

C:\>lxrun /setdefaultuser root

In an elevated command prompt window.

So far so good. I created a user for myself, added it to wheel, set a password, and ran the same command above to set the Linux environment to use this new user.

I also needed to manually update /etc/resolv.conf in order to perform DNS lookups.

Test 2: Rebuild @world

I wanted to quickly strain WSL’s capabilities. I made sure /etc/portage/make.conf was configured so that ~amd64 packages would be installed.
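That amounts to a single line in make.conf:

ACCEPT_KEYWORDS="~amd64"

With that in place, I ran: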

# emerge -u gcc
# gcc-config x86_64-pc-linux-gnu-6.3.0
# . /etc/profile
# emerge -eav --keep-going @world

Impressively, around 200 packages were rebuilt from source using the latest GCC without an issue. Things are getting serious.

In an upcoming post I’ll discuss how I built on this basis to launch Gentoo’s OpenRC init system, and use that to run services like SSHD.

OpenStack: Fix Security Group Enforcement on Compute Nodes

I discovered the instances in my home lab were not being protected by the Neutron security groups I’d set up for them – what followed was a week-long odyssey to discover the culprit. iptables rules were being put in place, and every other aspect of Neutron networking was just fine. Making things more mysterious, a test deployment on my own desktop, with the host running Fedora 23 and the containers running CentOS 7, did not manifest the issue.

[Diagram: a handy illustration from the excellent “OpenStack Hackspace” tutorial, showing where security groups are applied.]

Security groups are applied to the tap interface closest to the virtual machine, as shown in that diagram.

It turns out there’s a pair of kernel tunables that govern whether or not iptables rules are applied to interfaces that are members of a bridge, as in this case. It also turns out that Red Hat, for some reason, changed the default value of these tunables to 0 after RHEL 7 was released.

These tunables, and their required values, are:

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

These need to be set on the outer host if you run your OpenStack cloud in containers; otherwise, set them on your compute nodes, since that is where security groups are enforced.
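To apply them immediately and have them survive a reboot – the sysctl.d filename is arbitrary, my own choice for illustration:

# sysctl -w net.bridge.bridge-nf-call-iptables=1
# sysctl -w net.bridge.bridge-nf-call-ip6tables=1
# printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\n' > /etc/sysctl.d/99-bridge-nf.conf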

Hope this saves someone a week of troubleshooting! 🙂

OpenStack: Dedicate Compute Hosts to Projects

Use case: We want to deploy instances on a particular set of compute hosts because of their special or specialized capabilities.

On the API server(s), ensure that the following filters are included in scheduler_default_filters: AggregateInstanceExtraSpecsFilter and AggregateMultiTenancyIsolation.
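In nova.conf that looks something like the following – the preceding filters shown here are the Kilo-era defaults, and yours may differ:

[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,AggregateInstanceExtraSpecsFilter,AggregateMultiTenancyIsolation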

Dedicate Compute Hosts To a Specific Project

Create a host aggregate:

$ nova aggregate-create DedicatedCompute
$ nova aggregate-add-host DedicatedCompute dedicated-compute-host1
$ nova aggregate-add-host DedicatedCompute dedicated-compute-host2
$ nova aggregate-add-host DedicatedCompute dedicated-compute-host3

Add a key=value pair to the host aggregate metadata – we’ll use this to match against later.

$ nova aggregate-set-metadata DedicatedCompute filter_tenant_id=<Tenant ID Here>

We are here using the AggregateMultiTenancyIsolation filter. If we stop here, only members of the specified tenant will be able to create instances on hosts in this aggregate – but instances will also spawn on any other host that is either not in an aggregate, or has no filter_tenant_id metadata set. We want to isolate these hosts to a specific project.

Isolate Hosts To a Specific Project

We do so by creating a flavor and giving it specific metadata:

$ nova flavor-key m1.dedicated set aggregate_instance_extra_specs:filter_tenant_id=<Tenant ID Here>
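If the m1.dedicated flavor doesn’t exist yet, create it first – the sizing values here are purely illustrative:

$ nova flavor-create m1.dedicated auto 4096 40 2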

We are here invoking the AggregateInstanceExtraSpecsFilter filter. Note a couple of things:

  1. We’re filtering on the filter_tenant_id= tag we applied to the host aggregate above, but that’s just a convenience – we could have set any other arbitrary key=value pair in the host aggregate’s metadata and matched against it here. This is conceptually important for understanding how the two filters work: they don’t work together; we just happen to be using the same tag.
  2. The format of the above is very important. If you specify this in any other form the ComputeCapabilitiesFilter will try to match the resultant tag to a host and fail to start an instance with that flavor. This can make troubleshooting interesting – I had to walk through the code path of the nova scheduler and the filters to find this out. Fun!

Isolate Storage To a Specific Project

In this project’s case we want a specific storage pool, itself dedicated to a specific set of hosts and disks, available for use by instances in this project – but not other projects. We have created a volume backend called ‘elasticsearch’ that points to this storage pool, and will now create a Cinder volume type that makes use of it.

$ cinder type-create dedicated
$ cinder type-key dedicated set volume_backend_name=elasticsearch

We start by ensuring that all other projects will not be able to use this volume type:

$ for project in `openstack project list -f value | awk '{print $1}'`; do cinder quota-update --volumes 0 --volume-type=dedicated $project; done

We then grant a quota for this specific volume type to our special project:

$ cinder quota-update --volumes 100 --volume-type dedicated <Tenant ID Here>
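You can verify the result for any project with:

$ cinder quota-show <Tenant ID Here>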

References:

  • http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html
  • https://bugs.launchpad.net/openstack-manuals/+bug/1445285
  • http://www.hitchnyc.com/openstack-multi-tenant-isolation/

Importing an OpenStack VM into Amazon EC2

Some quick notes for those interested:

  • Install the EC2 API tools
  • Set Access and Secret key environment variables:
    • export AWS_ACCESS_KEY=xxxxxxxxxxxxxxx
    • export AWS_SECRET_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  • Set up an S3 bucket and secure it to taste, to be specified later
  • If exporting from OpenStack, remove the cloud-init package
  • Note all prep considerations on http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/PreparingYourVirtualMachine.html
  • Export the VM image – I’m using OpenStack (KVM) images in raw format, so these needed no initial conversion. You can also use VHD or VMDK
    • Docs: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ImportingYourVM.html
    • Command I used: ec2-import-instance ./my-awesome-image.img -f RAW -t c4.xlarge -a x86_64 -b my-awesome-bucket --region us-east-1 --availability-zone us-east-1a --subnet subnet-xxxxxx -o ${AWS_ACCESS_KEY} -w ${AWS_SECRET_KEY} -p Linux

You need to take the destination region into account – your subnet and S3 bucket must be available in the region/availability zone you specify above.

The import takes time after the upload is complete – watch progress with ec2-describe-conversion-tasks.
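Since the access and secret keys are already exported in the environment, the tools will pick them up; for example:

$ ec2-describe-conversion-tasks --region us-east-1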

Privacy Helper for Windows 10

During my evaluation of Windows 10 I’ve cobbled together a script that disables most known anti-privacy features in the new system. It also removes the unnecessarily installed default Modern apps. The script was designed with a professional environment in mind, but applies equally well to home users, and most settings apply to all editions of Windows, not just Pro/Enterprise.

It’s alarming but not surprising that we’ve gotten to this point – Windows has always served the agenda of its maker before that of its user. This edition ships with a significant set of end-user-facing UI improvements, but its goals are no different. They simply take advantage of the current state of the art and the current anti-privacy climate.

The script can’t be considered complete, as not all anti-privacy features are documented by Microsoft, and future updates will doubtless add more. A list of changes follows – comment out the corresponding section of the script if one of these is undesired:

  • Stops / Disables intrusive diagnostics services
  • Removes diagnostic scheduled tasks
  • Removes OneDrive
  • Removes access to the Windows Store
  • Removes access to Cortana
  • Blocks connections to Microsoft Accounts
  • Disables WiFi Sense
  • Disables Windows Update peer-to-peer
  • Requires Ctrl-Alt-Del to log on
  • Blocks “Add features to Windows 10”
  • Removes unnecessary apps (they can be reinstalled from the Store if desired, assuming you’ve left it enabled)

https://github.com/brad-x/Windows-10/

Standalone – an extension for Mozilla Firefox

Found this little gem today, thought I’d mention it on my corner of the web.

https://addons.mozilla.org/en-US/firefox/addon/standalone/

Creates site-specific apps, like Prism used to do – for Chrome users, this is the equivalent of “Create Application Shortcuts”, which I find indispensable personally.

The strange thing is, it’s been around since 2013. How did I miss it for that long? I’ve been looking everywhere for something like this.

Thoughts on Docker

I like the concept of Docker and containerization in general, but I have some pretty fundamental concerns.

Thought experiments:

  • How many deployed Docker images were torn down and redeployed upon the revelation of Heartbleed? Of Shellshock? In practice, not in theory.
  • How many Docker images are regularly destroyed and redeployed for the purpose of updating their userlands? Again, in reality, even with the most agile orchestration.
  • How many Docker images are actually deployed with a minimal attack surface, that being only the executables and libraries they need, rather than entire userlands?
  • How many Docker images are given to IT/Ops as single filesystem images rather than multi-gigabyte change layers, contributing heavily to wasted storage space?
  • How can Docker images composed of random people’s aging Linux userlands ever be taken seriously in an environment that needs to be kept certified, stable and secure?
  • What is the benefit of Docker given the above, when LXC and libvirt-LXC perform the same containerization, provide Ops with much greater flexibility in terms of orchestration and change management, and have for years?
  • Dan Walsh of Red Hat has much to say about the security of Docker and LXC containers – the most important statement he makes is that “containers don’t contain”. Containers provide no security; they are only useful for deploying applications in a manageable way. Given this, is it responsible to use containers based on full Linux filesystems? If you do, you’d better be ready to tear down your ENTIRE stack each and every time a major vulnerability comes to light.

Points worth pondering – these affect the future direction of container technology and shed light on the implications.