The Linux propaganda/FUD machine

Lately I’ve taken note of a dark trend in the Linux world, and looking back, it’s one that has been present throughout the history of Linux advocacy. Linux users commonly point out that large corporations spread propaganda about the superiority of their products and FUD about competing products in order to get ahead. This sort of trickery is widely regarded as low on the ladder of civility, yet I’ve found it exists in spades in many open source communities.

Two main incidents currently in play come to mind.

Picasa

The first and most recent is the release of Google’s Picasa photo management application for Linux. While hailed by some as a step forward for the Linux desktop, other voices dissented for puritanical reasons.

Namely, Picasa was ported to Linux using WINE, a compatibility layer that implements the Win32 API so that Windows software can run on Linux and other UNIX-based operating systems. Curiously, many Linux users denounce Windows compatibility environments as an impurity and refuse to even try Picasa because of it.
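
For the unfamiliar, launching a Windows program through WINE takes nothing more than prefixing the command with “wine”. Here’s a minimal sketch of wrapping such a launch in Python, assuming the wine binary is installed; the installer filename is hypothetical, used purely for illustration:

```python
import shutil
import subprocess
import sys

# Hypothetical installer filename, for illustration only.
INSTALLER = "picasa-setup.exe"

def run_under_wine(windows_exe):
    """Launch a Win32 executable through the WINE compatibility layer."""
    if shutil.which("wine") is None:
        sys.exit("WINE is not installed or not on PATH")
    # WINE translates the program's Win32 API calls into native Linux/X11
    # calls at run time; no copy of Windows is involved.
    return subprocess.call(["wine", windows_exe])

if __name__ == "__main__":
    run_under_wine(INSTALLER)
```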

Additionally, the developers of competing native Linux applications such as F-Spot and DigiKam denounce Picasa, appealing both to the WINE argument and to their own claim that their software trounces Picasa feature for feature [1] [2] [3].

Here’s the problem:

Linux users who refuse to use something for idealistic reasons are forgetting that compatibility environments have long existed to ease migration from one platform to another, and have even aided the rise of operating systems like Windows and Apple’s MacOS X to prominence. They’re also denying themselves well-designed, elegant new Linux software for an abstract and superstitious reason. That would be fine, except that I’ve seen many of these users actively try to dissuade others from trying Picasa. This harms the cause of getting companies to adopt Linux as a supported platform and works against the welfare of the end-user.

That places this behavior squarely in the category of FUD: an appeal to emotion, and to the misplaced fear that Microsoft code running under Linux exposes the system to Windows’ vulnerabilities and instability.

Likewise, developers of competing applications who claim their offerings are superior have little basis in fact [4]. While F-Spot may ostensibly maintain a similar feature set, being the same type of application, it is neither as feature-rich nor as stable under real-world use. And while DigiKam may have feature parity with Picasa, it is again nowhere near as stable in practice, frequently freezing up and presenting the user with baffling dialogs.

This qualifies the claims coming from the developers of these competing applications as propaganda, designed to falsely represent their applications as equal or superior.

MacOS X

Another example is a recent posting claiming Linux to be considerably faster than MacOS X at many scientific computing tasks. To the uninitiated, this makes Linux look a cut above, but there are a couple of things to keep in mind.

The first is that the conditions and preparations made for the benchmark clearly favored the Linux setup. This became evident when several bloggers and news editors pointed it out, and the original poster was forced to republish corrected results demonstrating no lead at all.

Second, while Linux may demonstrate faster response times under some conditions, it has a lot to learn on the usability front. No one is going to want to deal with the present state of the Linux desktop. That much is self-evident from the areas in which Linux is currently most popular: embedded and server software, where the operating system has no direct interaction with the user.

These facts come together to represent Mr. Jasjeet Sekhon’s comments as both propaganda and FUD, designed to falsely represent Linux as a superior choice and scare people into believing that the newly repopularized competition is lacking.

Propaganda and the appeal to fear are commonly used by those who have no other valid arguments to make. They are sinister and dangerous, and eagerly used by corporations and by volunteers and hobbyists alike, including members of the open-source community. I’m sad to say these seem to be human traits, not just greedy corporate ones.

How about some honest competition from the grassroots?

Why it would be easy for Apple to develop the mythic Red Box.

While reading my regular list of news sites this morning, I ran across an article detailing the improbability of Apple releasing a Red Box. For those who are unfamiliar, this is a long-rumored compatibility environment in which Windows applications would run alongside Mac programs, much as Blue Box ran Classic Mac applications within the new MacOS X.

As I gave this some thought, I found that a Red Box environment could be built out of existing technologies with surprisingly little effort.

Consider MetaVNC (http://metavnc.sf.net/) and Citrix MetaFrame (http://www.citrix.com) as display components; they already allow individual Windows applications to be projected onto the desktop of another computer, even one running a different operating system.

Consider also VMware and Parallels Workstation, virtual machine software that lets Windows run alongside a foreign host operating system.

And finally, consider the inherent flexibility, configurability and scriptability of Windows XP. Registry settings and policies can be set so that the Windows XP home directory is the same folder as the Macintosh home directory (mounted as a network share at login), and many other settings can be configured to hide the aspects of Windows that would otherwise reveal it’s running as an independent virtual computer. For example, C:\ wouldn’t have to be visible to the user at all: the user’s Windows programs would live on the Mac hard drive, available to the Windows environment through another network share. Through a Windows application’s open/save dialogs, the user would see only his home directory and whatever other directories were needed to complete the “Mac experience”. A rough sketch of the registry side follows.
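
Purely as an illustration of that registry surgery, here’s a minimal Python sketch using the standard winreg module (it would run inside the Windows guest). The \\MAC\home share name is hypothetical; a real Red Box installer would wire it up at login:

```python
import winreg

# Hypothetical share exposing the Mac home directory to the Windows guest.
MAC_HOME = r"\\MAC\home"

SHELL_FOLDERS = r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"
POLICIES = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"

def redirect_shell_folder(name, target):
    """Repoint a per-user shell folder (e.g. "Personal" = My Documents)."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, SHELL_FOLDERS, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_EXPAND_SZ, target)

def hide_local_drives(bitmask=0x03FFFFFF):
    """Hide drive letters from Explorer via the NoDrives policy.
    Bit 0 is A:, bit 25 is Z:; 0x03FFFFFF hides all of them."""
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, POLICIES) as key:
        winreg.SetValueEx(key, "NoDrives", 0, winreg.REG_DWORD, bitmask)

if __name__ == "__main__":
    redirect_shell_folder("Personal", MAC_HOME + r"\Documents")
    redirect_shell_folder("Desktop", MAC_HOME + r"\Desktop")
    hide_local_drives()
```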

What you’d then have is a set of technologies that, like Classic, would allow a licensed copy of Windows (either bundled or provided by the user, then modified by an Apple installer) to load in the background, display its applications on the Mac desktop alongside Macintosh applications, and access many of the same filesystems and resources that native Mac programs do.

This is all easily accomplished with existing tools. I do hope Apple gets on board with it someday. Sooner rather than later!

Also, I know Apple is reluctant to implement ideas other people have put forward. It’s presumptuous of me, but I don’t want to see legal concerns get in the way of a good idea. Apple, if you do take anything from this post, I hereby grant all my rights to this idea over to you and reserve no legal recourse to take a profit from any products you implement from it.

My Linux killer app: Novell Evolution

As I’ve been using Linux, I’ve explored many varieties of software and many desktop environments to see which fit me best. I eventually came to see that the GNOME desktop environment was becoming the dominant player, and would probably represent the mainstream future of open source desktops.

GNOME is a great environment to work in. Uncluttered and orderly, it presents the most common tasks to the user and keeps the less common ones hidden but accessible. This makes working with the computer a smooth process that proceeds with as little interruption and background noise as possible. One of the flagship applications of the GNOME desktop, and indeed, given its uniqueness, of the Linux/UNIX desktop as a whole, is Novell Evolution.

Novell Evolution is a personal information manager. If you’re familiar with Microsoft Outlook or Lotus Notes, you’ll be right at home with Evolution. It lets the user check e-mail, maintain a calendar of events, and keep an address book of e-mail addresses, phone numbers and whatever other contact information one sees fit to store. It even plugs into the rest of the desktop, so other applications can use the Evolution address book to reach the people in it – such as chatting with a friend through the instant messaging client once they’ve been added to the address book.

One of the most compelling features of Novell Evolution is its flexibility. It can connect to the numerous types of mail servers offered by different office environments and Internet providers. It even lets Linux users connect to Microsoft’s Exchange server, which stores one’s address book, calendar events and e-mail on a remote machine so they can be accessed from anywhere – another computer, a phone, a PDA, you name it.

And so Evolution is a crown jewel of the open source world, offering functionality not seen in any other freely available application. But there’s just one problem.

It never works.

Evolution has matured a great deal over its history. Before version 2.0, the client was unable to properly handle the placement of IMAP folders on the mail server, causing a user’s e-mail to disappear and become inaccessible if Evolution was used even once to check it. As Evolution has matured it has become more and more usable, but glaring issues linger in every release its developers unleash on the world. Most prominently, the Exchange compatibility plugin never functions properly with a new release; on average, four months pass between the release of a new version of Evolution and the recovery of one’s ability to check Exchange mail, calendars and address books with it.

Other releases of Evolution have shipped without the ability to log into the mail server at all (somehow, passing login credentials was not considered a required feature of a mail client). Random crashes and lockups requiring a forced restart of the program abound. Repeated entries on Novell Evolution’s bug reports page go unanswered and unsolved by an apparently uncaring development team. Anyone looking to Novell Evolution for office use will be sorely disappointed to find an application with an impressive feature list, and an equally impressive list of failings.

Most surprising is that within the last three years, many of Evolution’s developers and maintainers were hired by Novell, and the product was branded a Novell product. Novell touts Evolution as a selling point of the Novell Linux Desktop, an operating system built around SUSE Linux, which Novell also acquired. Yet while this high praise and promotion issues forth from Novell, no priority appears to be given to the functionality of the program itself.

Evolution itself isn’t a terrible program. It’s the developers and maintainers that currently have stewardship over it who are causing the problem.

Documentation on the use and configuration of the program is poor, especially in the area of connecting to Microsoft Exchange. That same documentation presumes the user is running Evolution on Novell’s own version of Linux, an assumption with frightening implications about Novell’s attitude toward Linux (can you say “cash grab”?). Information on the Evolution homepage is sparse. The developers fail to proactively fix bugs as they’re reported and don’t test the application in enough usage scenarios, leading to a fragile program that may not work as expected from one version to the next, and isn’t even predictably functional from one Linux distribution to the next.

I asked myself what the Evolution development team’s response to this blog entry would be, and the obvious answer is “contribute fixes and documentation updates, and be sure to file bugs whenever you find a problem”. That response reflects a systemic lack of awareness that the program is aimed at a wide, non-technical audience, and that it’s their own failure to adopt proper development and quality assurance methodologies that causes these problems.

Do I think Novell will step in and clean up its act? Given Novell’s history of poor end-user software design, I don’t hold out much hope. Red Hat has the best chance of getting fed up and fixing Evolution’s problems (as yet, they haven’t), but Novell has gone to such lengths to brand Evolution as a Novell product that Red Hat may not be interested in fixing an application largely developed by a corporate competitor.

To recap: Novell Evolution looks great, and in theory it would work great if it weren’t maintained by pinheads. I just want to check my e-mail.

Dual Booting is Yesterday – Virtualization is Tomorrow.

I have to say I’m not optimistic about Apple’s recent decision to support dual-booting their new Macs. That’s the sort of sophomoric corporate blunder that can capsize even the healthiest of businesses.

In short, I think it’s going to backfire.

Companies aren’t going to spend millions of dollars porting software between two operating systems when Macs can now natively run both. The fiscal bottom line doesn’t care whose widgets are prettier and whose user interface is more pleasant overall.

End-users aren’t going to invest in learning a new operating system, for the simple reason that they don’t have the time.

This means that Mac users, along with the few who actually do make the effort to switch (the main audience Apple serves by selling Macintosh computers), will face a slowly dwindling selection of native Macintosh software, forced to run more and more Windows applications until they find themselves in the Windows environment almost all the time.

In the meantime…

A better way to show people that MacOS X is compatible with the world of Windows would have been to bundle a virtual machine, or to promote virtual machines as the answer. A dual-core Mac is more than powerful enough to run business applications within a virtual machine, especially given the UNIX underpinnings of MacOS X and a scheduler that would let a guest operating system scream.

Gaming, you say? Gamers are in their own world, and I can’t really say I care what they want. Dell/Alienware can keep them.

Benefits of dual-booting:

Slightly higher performance in some cases.
I was going to add “greater support for peripheral devices”, but this is a demographic issue and I can’t think of many peripherals Macs don’t natively support already.

Drawbacks of dual-booting:

Windows XP can’t access the Macintosh filesystem without a third party application called “MacDrive”, which I really wouldn’t trust with my data.
MacOS X can read Windows XP’s section of the hard drive, but is unable to save changes to it (read-only).
Time is wasted waiting for the computer to restart.
Applications from both operating systems can’t be used at once.

Benefits of a virtual machine:

The second operating system runs at near-native speed, without the penalties usually associated with emulation.
The user is able to exchange files and data between both operating systems in real-time, including moving and copying files between both environments, and copying and pasting images and text between them.
Any number of operating systems can be run at the same time, limited only by how much memory the host computer has.
No time waiting for the computer to restart.
Can use applications from all operating systems simultaneously, and exchange information between each.

Drawbacks of a virtual machine:

Each operating system requires a sizeable amount of memory. A computer running two operating systems simultaneously should have a minimum of 1 gigabyte of RAM, to accommodate the current average of 512MB per OS. Once Windows Vista comes out, a virtual machine hosting it will probably need 768MB to 1.5GB of RAM all its own, meaning 2 to 3 gigabytes of RAM will be the recommendation for the host computer at that time (a toy sizing calculation follows this list). The upside? Memory is extremely cheap these days, and a gigabyte of RAM can be obtained for as little as $100 ($80 USD).
Not everything can currently run practically in a virtual machine. A good example is video games that make heavy use of the 3D acceleration features in modern video cards. These games depend so heavily on the layout and behavior of a native computer that a virtual machine would not play them well at all. Like I said, I don’t consider this a problem, because most gamers aren’t going to buy a Mac for its native OS anyway, and would save money building their own computer.
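
To make the first drawback’s arithmetic concrete, here’s the toy sizing calculation promised above, in Python. The per-OS figures are the rough averages quoted in this post, not measurements:

```python
# Back-of-the-envelope RAM sizing for a host OS plus concurrent guests,
# using the rough per-OS figures quoted above (not real measurements).
def recommended_host_ram_mb(host_mb, guest_mbs):
    """Total RAM the host machine should have: host OS plus every guest."""
    return host_mb + sum(guest_mbs)

# Today: host and one XP guest at ~512 MB each -> 1 GB minimum.
print(recommended_host_ram_mb(512, [512]))    # 1024
# Tomorrow: a Vista guest at the high end of the 768-1536 MB guess -> ~2 GB.
print(recommended_host_ram_mb(512, [1536]))   # 2048
```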

Aside from this, a virtual machine is the way to go. Computers are now powerful enough to comfortably run not one, but two operating systems at once. Intel and other CPU manufacturers are responding to the rising demand for virtual machines by adding virtual computing extensions to their processors, enabling the next generation of computers to even more effectively run multiple operating systems at once. Businesses are beginning to use virtual machines to consolidate multiple aging servers into a single, powerful machine hosting each of the old ones. Corporate desktops requiring high security are starting to see each application running in its own virtual environment, protecting all others and the rest of the computer from a virus attack.

So given these obvious advantages, and the obvious capability of Apple’s computers to virtualize rather than having to dual boot, why would Apple respond with a method that inconveniences the user and could seriously harm the viability of MacOS X?

Heaven knows.

Why Red Hat is doing everything right

Since beginning to use Linux I’ve learned to tolerate a lot in the name of free software. Unpolished programs, non-working features, ugly user interfaces – these are the price to pay for freedom from proprietary operating systems designed by corporations.

Bit by bit, though, I lost patience with Linux. It became obvious that many of these shortcomings and much of this lack of polish weren’t the result of software development being overwhelmingly complex. The open source model is supposed to tackle exactly that by bringing many people’s talents to bear on a single task.

Most of the shortcomings that prevent Linux from reaching critical mass right now are easily solved, but go unsolved due to differences of opinion and other forms of disunity.

To succeed, Linux needs to be a platform, not just a kernel or an operating system.

Linux is famous for having countless distributions surrounding it, each with their own maintainers, specific feature-sets and unique traits. While many open-source hard-liners view this as a terrific thing, it’s not. It harms the appeal of free software because there’s no single product to support. A platform is more than a collection of programs and a kernel – it’s a standard set of parts for users and developers alike. Everyone using every computer running Linux should be able to expect the same parts, the same user interface, the same everything.

Red Hat accomplishes this.

But how can one distribution among many accomplish this? Simple. Fedora and Red Hat are backed by a company that develops Linux for commercial use. Many Red Hat innovations have become de-facto standards. Everyone knows what an RPM is. Everyone knows GNOME is Red Hat’s desktop of choice. Most importantly, Linux application developers almost always ship their programs for Red Hat, and most commercial applications designed for Linux are built with GTK+, the graphical foundation of the GNOME desktop that Red Hat backs.

Red Hat has achieved this by realizing early on that while open-source ideals are compelling, more than ideology is required to thrive in the capitalist societies which comprise much of the world today. They set out to market Linux as a solution that was useful to businesses, and to build a business around Linux itself. It is this business and its promise of both ongoing support and consistent design which creates a platform.

Unfortunately, Red Hat is loathed by many for this. Linux devotees view them as sell-outs attempting to turn Linux into a closed, proprietary system. This is quite an unrealistic point of view. In reality, Red Hat walks the line between the corporate world and the open source world, to the benefit of everyone.

For Linux to succeed, its developers can’t be afraid to make decisions not everyone agrees with.

Design by committee is impossible. Individual preferences are too varied, and you can’t make everyone happy. Unfortunately, many parts of the open source community attempt to do just this, which leads to many projects popping up, each with the same goals but a slightly different vision of how to achieve them.

As a result, there are many desktop environments, including KDE, GNOME, XFCE and GNUstep, each of which aims to create a complete, polished, easy to use environment for end-users. Each has unique strengths the others lack, and some are closer than others to the goal. SUSE, Slackware, Gentoo, Ubuntu and Mandrake are five examples of the myriad distinct Linux distributions facing the often befuddled first-timer. In each case, the creativity and development manpower are split five ways, because none of the projects want to give up their preferred methods and get down to the unglamorous work of pursuing a common goal.

Red Hat gets this done.

Very early on, Red Hat began to make decisions as an organization about which open source projects it was going to focus on. GNOME became the default desktop, marginalizing KDE and the others.

Red Hat also made choices about which applications it was going to ship and support as “flagship products”, while marginalizing all the other options to an unsupported “extras” category with no guarantee of functionality.

The design choices Red Hat makes regarding the base system are also unique in the open source world. Areas including the installation program, the package system and many aspects of the filesystem layout (the locations in which system files are placed on the hard drive) are implemented in Red Hat and Fedora with efficiency and integration in mind.

And with a minimum of the typical open-source peer review.

This simple omission is reason enough for many members of the open source community to vilify Red Hat, and to claim that they’re eroding open source ideals for the purpose of making money.

This is patently silly, and the reasons for not consulting a community full of discordant and disagreeable people are simple.

The open source community is unable to efficiently make sweeping design decisions.
The open source community is unable to set ego aside and work toward a common goal.
The open source community is not able to identify the needs of its target audience.

Red Hat and its user-base are separated from the open source community at large by the gift of practical thinking, which brings us to our final point.

For Linux to succeed, its developers must understand the limitations of the open source model.

Red Hat Linux and Fedora Core are designed from the point of view that open source and free software are righteous ideologies that free computing from arbitrary locks and keys and open up opportunities for learning and productivity to people who would otherwise never have them.

They’re also designed with the understanding that simply being open source doesn’t make software excellent. Leadership, organization and common goals are required.

Consistent design and attention to detail are apparent throughout Fedora Core. From its stable development platform to its administrative tools, it’s clear that a great deal of effort was spent polishing the rough edges of free software to create an operating system that’s not only functional, but enjoyable to use.

Red Hat’s corporate support and consistent design make it a favorite for commercial application vendors, who often support their software on Red Hat’s operating systems.

While many Linux users view the use of proprietary operating systems designed by corporations with disdain, they should warm up to the fact that sometimes a corporate atmosphere can be of great benefit to the development of free software. Red Hat is at the cutting edge of mainstream adoption of free software.

For Linux to succeed, its adherents need to follow Red Hat’s example.

A Linux switcher’s upcoming tale of tragedy

I just read an article commenting on the professor who switched one of his labs to Ubuntu Linux. This is something to keep an eye on: I’m positive that if you check back within a year, you’ll find the lab has been switched back to proprietary software.

People often invoke the principles of free software as a good reason to switch from proprietary solutions, without realizing that paid professionals do indeed take the time to make higher quality, more usable desktop and workstation software. The reputation UNIX gained (and which Linux is now trying to usurp) for utmost reliability and rock-solid stability comes from the fact that it was developed by giant corporations who paid their programmers a great deal of money.

Having spent years in the open source world of Linux myself, I can say with some authority that most people who equate the usability of the current Linux desktop with that of MacOS X or Windows do so because they lack a fundamental understanding of what usability really means, chiefly because they’ve spent all their time using Linux to the exclusion of everything else, for ideological reasons.

This is not to say Linux won’t ever reach the same standard of quality as its proprietary counterparts; but before it can, its developers and current user base have to be more objective and open-minded, and willing to do the unglamorous work of fixing bugs and improving usability instead of introducing new features.

Running Windows on a Mac the right way

I came across this forum thread in my daily browsing; it details an effort underway to enable virtualization in the open source QEMU emulator using MacOS X as the host operating system.

What does this mean? Instead of having to dual boot, Windows and other Intel-based operating systems will be able to run at nearly native speed within MacOS X itself. This is especially useful for those who still need Windows for the one or two oddball applications they use at work (IT departments are famous for cooking up their own little proprietary things) or in some other setting. I myself keep Virtual PC on my PowerBook in case I want to test Windows networking while I’m on a service call.

But what about applications that require higher performance? A virtual machine can’t run in accelerated graphics mode, which means no 3D gaming or other very high-performance, time-sensitive apps (3D acceleration may become possible in the future; it all depends on whether someone gets started on it). Like I said before, if you’re that into gaming and there’s no Mac version, buy or build a PC. It’s the simplest, most practical solution.
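
For the curious, booting a Windows disk image under QEMU is already a one-liner today; it’s just slow without the accelerator this thread describes. Here’s a minimal sketch wrapped in Python, with a hypothetical image path:

```python
import subprocess

# Hypothetical pre-installed Windows XP disk image.
DISK_IMAGE = "winxp.img"

# Boot the image under QEMU. Without a kernel-level accelerator, QEMU falls
# back to pure binary translation, which is why it's slow on MacOS X today.
subprocess.call([
    "qemu",              # the PC system emulator (qemu-system-i386 in newer builds)
    "-m", "512",         # give the guest 512 MB of RAM
    "-hda", DISK_IMAGE,  # primary IDE hard disk
    "-boot", "c",        # boot from the first hard disk
])
```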

If you’re doing other high-performance computing which requires Windows, well, why are you doing that? 😛

Afghan on trial for being Christian

This BBC article should make everyone stop and think about religious tolerance and what it really means. Human civilization has existed for some 10,000 years; one would expect we’d have gotten past such primitive behaviours by now.

“We will invite him again because the religion of Islam is one of tolerance. We will ask him if he has changed his mind. If so we will forgive him,” the trial judge is quoted as saying in the article. However, the court system in Afghanistan follows Sharia law, the body of law set down by the Islamic faith, under which a man who converts from Islam to Christianity is to be put to death.

Welcome to 2006, ladies and gentlemen.

Advocacy gone wrong

I read an article yesterday advocating desktop Linux, and immediately had issues with its credibility.

The fact that it bears the word ‘pwns’ in its title is the first sign of trouble, but I decided I’d try and see if the author had valid points to make. A few problems.

The article begins with the usual fact that Linux is more secure and less vulnerable to attack than Windows, as proven by several studies and contests. I won’t argue this, as it’s clearly true. Where the author almost immediately goes wrong, though, is in complaining that bringing a Windows system to a secure state requires “six extra hours to tweak every registry setting, install antivirus, spyware/adware software, set up email scanning, and all that,” in his words.

I must ask: how long does it take to configure a Linux system to even function as a desktop operating system? The author partially answers the question himself in his next point: “With Linux, you may have to actually invest some time up front to get stuff to play nicely,” he relates.

This is just the start of a directionless rant on how great Linux is and how much Windows sucks. I couldn’t determine who the target audience was supposed to be, or for what main purpose the author was advocating the use of Linux (although one assumes it’s for use as a desktop operating system, as per the name of the website).

I gave this some thought after having read the article and came to a couple of interesting conclusions.

First, the article shows a lack of understanding of the issues surrounding the design of a desktop OS. The operating system has to be designed for use by its intended audience; you can’t expect that audience to break out a UNIX manual to figure out how to turn up the sound volume or install a webcam driver (and no, you can’t fall back on the excuse that it would all be pre-configured by a technician. What if Jimmy or Sally bought a webcam or a printer on their own?).

Second, it’s this lack of understanding of the requirements of a desktop OS that permeates the open source community and currently prevents it from achieving success on the desktop. This isn’t just a point I came up with on my own; it’s a well-known issue. The Linux world needs one desktop environment, one development environment, and one way of doing everything. It needs to get past the idea that every developer has the right to go his own way rather than play in the same sandbox as everyone else. The community only hurts itself.

An article in The Economist, referenced by Slashdot, points this out. Only those open source projects with a sense of direction ever produce anything worthwhile: people who work as a team toward common goals, and who are more interested in realising an end result than in having their own personal preferences satisfied.

Corporate involvement with the Linux desktop (Novell, Red Hat, IBM) is beginning to fix these problems, and that’s a good sign. Hopefully the rest of the open source community (and people like the author of the “pwns” article) will see the light and help out instead of continuing to hurt their own cause.