Category Archives: Tech Stuff

USB HIDs in VMware Workstation and Fusion

Have you ever come across the need to run a VM on, say, your laptop, and forward USB devices into it? Sure, and not a problem, you might think. But when it comes to USB HIDs (Human Interface Devices), such as keyboards, signature pads, or prox card readers, you hit a wall. However, there is a little-known, incredibly simple trick to make it work:

Open the VM's .vmx file in your favorite text editor and add this line:

usb.generic.allowHID = "TRUE"

Restart your VM and you will be able to connect these devices! We use this for demoing and testing Imprivata with NoTouch Desktop running in a virtual machine. Very handy! Works in both VMware Workstation and VMware Fusion.
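If you prefer scripting the change (with the VM powered off first), here is a minimal Python sketch – the file path is hypothetical, and the setting is appended only if it is not already present:

```python
from pathlib import Path

def allow_hid(vmx_path):
    """Append usb.generic.allowHID = "TRUE" to a .vmx file, idempotently.

    Returns True if the line was added, False if it was already there.
    """
    path = Path(vmx_path)
    lines = path.read_text().splitlines()
    # Skip if the setting already exists (in any casing)
    if any(l.strip().lower().startswith("usb.generic.allowhid") for l in lines):
        return False
    lines.append('usb.generic.allowHID = "TRUE"')
    path.write_text("\n".join(lines) + "\n")
    return True
```

Run it against e.g. `~/vmware/MyVM/MyVM.vmx`, then power the VM back on.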

Unbiased Thin Client Market 2017 Q4 Overview

While my company Stratodesk is not directly active in the Thin Client market, we have connections and knowledge to share. In general, the market hasn't changed a lot over the years. Thin Clients are still Thin Clients: small boxes without a fan and with no spinning hard drive. The latter used to be extraordinary; with the advance of SSDs in normal PCs it is not unusual any more. Thin Clients serve as endpoints for the VDI protocols, such as Microsoft's RDP, Citrix ICA/HDX, VMware Horizon's Blast, and Teradici's PCoIP. That said, these little boxes always need a server that runs the actual Windows OS and the actual applications you are working with. They do little to nothing on their own. Some models can run a web browser, some don't, but that's about it. Otherwise, it wouldn't be considered a Thin Client.

Dell and HP are the companies that produce the lion's share of Thin Clients today. Dell acquired Wyse a couple of years ago and integrated the product lines into its own portfolio, with some disruption to the sales channels. HP is undergoing a lot of re-orgs. Both are, sadly, not doing a great job in sales and marketing right now – just a little thing, but telling! – the Google AdWords link that shows up when you search for Dell Thin Clients goes to a generic overview page that doesn't even mention Thin Clients. Unacceptable in a well-managed organization. While their hardware is still excellent, there doesn't seem to be any innovation on the software side. (Of course, everyone in the field is now claiming to have software innovation. Well, that's marketing.)

Speaking of Thin Clients, you have probably never heard of Clientron. Clientron is the Taiwanese manufacturer that makes most of the Thin Clients, especially the "b-brands", the smaller, regional manufacturers. Not all, but most. It is funny to see to what great lengths these companies go to conceal that: you open the box, boot Linux, run dmidecode, and it tells you "Clientron".
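You can script that check, too. A sketch assuming a Linux system, where the kernel exposes the same DMI string that `dmidecode -s system-manufacturer` prints under /sys (readable without root, unlike dmidecode itself):

```python
from pathlib import Path

def system_manufacturer():
    """Return the DMI system manufacturer string, or None if unavailable.

    Reads the sysfs file the Linux kernel exposes; equivalent to
    `dmidecode -s system-manufacturer` but needs no root privileges.
    """
    dmi = Path("/sys/class/dmi/id/sys_vendor")
    try:
        return dmi.read_text().strip()
    except OSError:  # not Linux, or DMI not exposed
        return None
```

On a rebadged unit this will often print "Clientron" regardless of the sticker on the case.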

Now who are the runners-up, those who attack the big ones and try to benefit from their weaknesses? One company to watch is 10ZiG, an American company based in Phoenix, AZ, with a wide product portfolio and a good track record. Also American is Devon IT, which has had various OEM partners over the last years. On the other hand, the German company Igel (Igel is German for hedgehog, which is their mascot) is spending a lot of marketing dollars on taking market share away from HP and Dell (Tesla giveaway) and has managed to build a name here quickly. Let's see how this will play out.

I don't think many of them will be around much longer; the market will consolidate a lot in the next years. Some vendors are clearly out to be acquired (more about that in a future article). On the customer side: who in their right mind would spend $400 on a rebranded imported box with little functionality? The Zero Client was an interesting alternative, but it seems not to have gained traction outside of VMware, which has since switched to its own Blast protocol. Made by Teradici, Zero Clients aimed to do all the protocol decoding in hardware. I liked the idea, but I didn't like their marketing claim that "it has no OS", because that is obviously not true. If that were true, how would they explain the availability of firmware updates? Right?

The most promising of the Thin Client alternatives is the Raspberry Pi, especially the Citrix HDX Ready Pi initiative. Citrix has created an open ecosystem for hardware companies to join and get into this market. Right now, ViewSonic and NComputing are selling Raspberry Pi based Thin Clients – the SC-T25 and RX-HDX, respectively. They cost less than a third and can do the same, if not more: fully managed, enterprise ready, decoding in hardware (the Raspberry Pi has an H.264 hardware decoder). They combine the advantages of Zero and Thin Clients and go for a street price of $100 to $120. However, I need to stop here; this is supposed to be an unbiased review, and I do have something to do with the Raspberry Pi.

What else? The Intel NUC is a promising alternative. It is way faster, more elegant, and cheaper than any Clientron/you-name-it Thin Client. NUCs are sold as barebones for now, not exactly enterprise ready by themselves, but a good opportunity for local resellers and assemblers to step in. You still need an OS and a management solution. I'd love to explain the benefits of NoTouch Desktop, but since this is my creation I am obviously biased. Read what Gabe is saying about NoTouch. Also, you can get the Intel Compute Sticks.

Chromebooks have been hyped for a while as potential Thin Client alternatives, but honestly, apart from some segments in education, I don't see them around too much. The apps are still not there. If you aren't all-in for Web only and G Suite, they provide little to no value.

Summarizing: even though the Thin Client market in the narrow sense shows little movement and is still a niche, as it has been for a decade, there is a lot of innovation around it. Citrix, being the recognized ecosystem leader, is still innovating – a lot, actually! The developments around the NUC, the Compute Stick, and especially the part where I'm biased, the Raspberry Pi – this is what I consider the most exciting development of all.

Disclaimer: As the Founder & CEO of Stratodesk and lead architect of NoTouch Desktop, our PC repurposing/Thin Client conversion product, I have seen a lot of Thin Clients come and go. While I had originally intended NoTouch as a PC product, it was surprising to me how many people actually used NoTouch to upgrade their Thin Clients because they were unhappy with the built-in software. That said, I don't consider these "legacy Thin Client manufacturers" to be competitors in any way, as Stratodesk does not make or OEM-rebrand any hardware. In fact, they can be thankful: we have made lots of HP, Dell, Lenovo, Igel, etc. customers happy again by upgrading their boxes!

Apple and the rebirth of the spec

When Apple introduced its new wave of devices – the iPhone, the iPhone 3G, the iPad – it was so refreshingly unspectacular (no pun intended): Apple did not provide "specs", no MHz, no CPU details, just the obvious flash size in megabytes, which they instantly translated into the number of songs and length of movies you could store.

And now – today, Tuesday September 10, 2013, people are crazy about the new A7 processor, its 1 billion transistors, the iOS transition to 64 bit. And it has a co-processor! Do we see a rebirth of the spec, those things that made computing so gory for non-IT-freaks?

What has happened? For those of us who made all the generational changes from 16-bit x86 real mode to the 386/486 and the Pentium, starting at 33 MHz (remember what DX2 stood for?), the Apple makeover was indeed "refreshing": you could buy a gadget that was cool but not nerdy, you were not forced to compare spec sheets (that's why I still use Apple instead of Android), and the only important number was expressed in dollars, not kilo/mega/giga-something.

The sad thing is: either it just didn't work in the long run – after all, computing speed does influence fun; it doesn't matter that your machine spends 99% of its time idle, waiting for the user's next finger-tap or keystroke, because the moment you need the computer to do something, you want it to happen instantly – and so "specs" become a sales argument again. Or Apple is going back from being a lifestyle gadget vendor to a "computer" (Silicon Valley) company. Either way, it would be interesting to hear Steve Jobs' opinion on this.

Windows XP: The best OS ever?

Finally – the end is near for Windows XP. I'm glad to see this coming and even happier to write about it. There are a few thoughts, however, that I'd like to shine a light on.

First, what I find most interesting is this: we are all so convinced that IT runs at such a fast pace – how can it be that an operating system is still widely used after more than a decade? So it seems that even IT sometimes enjoys a stability usually known only from the world of household appliances. I was a young boy at the time, but I still remember MS-DOS 3.30, and I'd consider DOS 3.3 the de-facto standard MS-DOS for many years. I remember DOS 4 had issues, DOS 5 was better, and DOS 6 was finally something stable. MS-DOS 3.3 was released in August 1987 (I had to look that up), DOS 6 in March 1993, so DOS 3.3 lived for almost six years. I agree that most people would have switched from DOS 3.3 to DOS 5, not DOS 6, so DOS 3.3 had about half the lifespan of Windows XP. I can't think of any other OS that lived so long in such a widespread fashion as Windows XP.

Windows XP SP3 screenshot
Windows XP SP3 screenshot. Used with permission from Microsoft.
Actual comparisons are difficult, if not impossible, though. Windows XP was seriously updated by its service packs. Could you compare XP Service Pack 2 to the change from Windows 95 to 98? I'd say so, although I'm for sure not educated on the Windows 95-98-ME series, which I – then at university – tried to avoid at all costs. Do you remember anyone seriously using Windows NT 4.0 by the time XP was out? How would you compare different OS X versions with different Windows versions? I believe Microsoft would have considered Snow Leopard to be just a "Service Pack" (sorry, Apple folks), as there were few visible changes.

Now why is Windows XP so amazingly long-lived? The answer is probably something IT people don't want to hear: because it solved the problems people had, the problems for which they bought a PC. Innovation has slowed down, period. Who cares if a start menu is square, rectangular, or round? Yes, PCs before the Windows 2000 age were horribly unstable and difficult to use, hardware support was a nightmare (remember the time when printer drivers were considered part of the word processing application?), and bad design choices stifled innovation (think 640 KB). But after the year 2000, not only did all those people who were 50+ in the early PC age retire, with a more PC-savvy generation coming up, but using a PC had improved dramatically from a usability perspective, and PCs were stable. I'd say most of the crashes that happened in the XP era were due to faulty hardware, most notably RAM, when people sought to buy the cheapest RAM possible. A newer MS Office version doesn't allow you to do things that you couldn't do before. I'd say Office XP was fine. If you can save your file directly to SkyDrive from an Office app, oh well, that's not a game changer. WYSIWYG word processing, that was a game changer!

Cutting a long story short, what I'm saying is that the basic problems the "personal computer" (not capitalized, including Macs) aimed to solve are now solved. It is like cars. The very early days of cars were terrible; you had to be a mechanic to operate one. But already in the 1950s cars did just work, much as personal computers do now. Did car sales stop? No. Are new cars all about design? No – I agree if one says design is the major driver in the auto market, but it's also about technology; not just power and efficiency, but also safety, comfort, and electronics. See the difference: I would refuse to work on a Windows 95 machine today as much as I would refuse to drive a 1920s car. But putting all the known issues like device drivers etc. aside, you could as easily use Windows XP today as you could drive a 1960s Ford Mustang today. I say "could", because you won't, as the 1960s Mustang can't connect with your iPhone and has no SiriusXM and no airbags. And at least here in San Francisco, your friends might suspend your friendship until you get a Prius instead.

Thus, my conclusion is: Windows XP was probably the best OS that Microsoft ever made. It solved problems, it created a standard, and all subsequent versions that changed more than just cosmetics were greeted with incredible negativity – it happened to both Vista and Windows 8.

Without looking back too much: Windows XP has to go. It is more than ten-year-old operating system technology, its looks are antique, and maintaining the old OS is a pain for Microsoft, ISVs, and users. Home users, please – get a new personal computer. If you bought a machine that many years ago, it will be slow, web browsing is no fun, and if you have a laptop, it is probably heavily overweight by today's standards, and I bet the battery is broken already.

The picture is different in the enterprise world: CIOs with big pockets have already bought thousands of new devices with Windows 7 pre-installed. Still, there are just so many corporate and small business PCs left running XP. Those CIOs who are using VDI/desktop virtualization – in a nutshell: install your users' OS on a server and just stream the pixels out to their devices – will most probably know my company Stratodesk and that we repurpose PCs with NoTouch. Yes, it is a paradox, but I'm a bit sentimental about Windows XP while actively developing and marketing a product that wipes Windows XP and replaces it with something better. But there is no room for doubt: running XP after the official end-of-support date of April 8, 2014 is a substantial security risk, both for home and business users. Let's keep Windows XP in "our" memory, not in RAM and not on hard drive memory.

Is a non-stable API an API? API stability reviewed…

Today I read a blog entry written by a friend who works with Ember.js:

I'm not talking about Ember.js or UX or frontend. What strikes me most is the API stability thing that was mentioned. Three production versions of Ember.js, and they are about to roll out a fourth; reading these lines set my mind spinning around some API things, in no particular order: I remember our own experience of migrating our app from Python 2.4 to 2.6/2.7. I think of Java and the days when our product was still supposed to compile on Java 1.4, and the build stopped just because somebody checked something in that used the String contains() method (oh yes, Java 1.4 did not have it). But I also think of how slowly "standards" evolve: the moment there is a body concerned with standardizing, some kind of industry committee, things are usually dead (exceptions of course exist). In the perfect-world scenario, stability would practically be every API's middle name. But that is not the case in real life.

One might say that scripting languages are especially prone to unstable APIs since they allow for such easy change. I often hear this when people try to point out to me that Java is so enterprise and the interpreted languages are so much for script kiddies. Honestly, I disagree. It is not a virtue to let laziness dissuade you from an API change you have planned – in other words, if you have a reason to change your API, you'll do it; it is just more work in Java than in, say, Python or Ruby.

Then there is the thing about rapid innovation. Again, the moment a "standardization body" is concerned with something, things usually go at a much slower pace. Having multiple versions of something is most probably a smaller problem than stifling innovation, even in an enterprise environment.

There is one big exception to this rule: the moment when multiple modules with inter-dependencies become unstable. You start with module x, you find out that x requires y, y requires z, but the current version of z is not compatible with x. You can or could find such things in Apache Commons (which is a Java component library, not the web server product). Another proof that it is not the language that determines stability.
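The x/y/z situation above can be sketched with a tiny conflict detector – the module names and version numbers are made up for illustration:

```python
def find_conflicts(requirements):
    """Given {module: {dependency: required_version}}, report every
    dependency that is pinned to different versions by different modules."""
    wanted = {}  # dependency -> {version: [modules that require it]}
    for module, deps in requirements.items():
        for dep, version in deps.items():
            wanted.setdefault(dep, {}).setdefault(version, []).append(module)
    return {dep: vers for dep, vers in wanted.items() if len(vers) > 1}

# The x/y/z example from the text, with hypothetical version numbers:
reqs = {
    "x": {"y": "1.0", "z": "1.0"},  # x needs the old z
    "y": {"z": "2.0"},              # y pulls in the new z
}
conflicts = find_conflicts(reqs)    # {"z": {"1.0": ["x"], "2.0": ["y"]}}
```

Real dependency resolvers are far more involved (version ranges, transitive closure), but the failure mode is exactly this: one dependency, two incompatible pins.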

So, to summarize: if anybody thinks "oh no, so many different API versions", I'd say, "well, for sure not optimal, but not that bad". As long as these are isolated modules and not cross-referenced with other such modules in multiple versions – that would for sure put you in software engineering hell…

Fun fact: If you search for API stability on the web, you’ll get tons of links. However, once again this proves the world is not just about us, the IT folks. API also stands for “active pharmaceutical ingredients”…


Eight years of WordPress

I have been using WordPress for more than eight years now. What a long time! It has powered my tiny little Outdoor Weblog since its early beginnings in late 2004. For those unfamiliar with WordPress, it is an open-source content management system (CMS) – the thing that powers a modern website.

I remember how easy everything in WordPress has always been, be it content editing, web design, or even code modifications. The company webpage was running on another system named Joomla at the time; allowing that to happen is at the top of my list of really wrong decisions in my life.

Now my company webpage is powered by WordPress, and this site is also powered by WordPress. WordPress has kept its simple and elegant database model, its great plugin interface, and its good track record for security and speedy updates. If anybody today is deciding which CMS to use, I can only recommend WordPress. It's hardly necessary to code – in fact, you'll find a plugin for everything. If I code, I do it more or less for fun. There is one thing, however: you don't need that many plugins. In fact, most people will be fine without any plugins – that's the beauty of WordPress. (Now if only it were written in Python, my favorite programming language 😉 – just kidding.)

All this came to my mind as I was attending a WordPress meetup today in San Francisco – not my first one, but it made me think about how successful this piece of software has been and how much it has helped me and others.

NoTouch 2.35 “Sonora” goes gold!

Finally – we finished all work on our release codenamed "Sonora" (named for the beautiful Sonora Pass in California). A release is always an extra burden on everybody, at least in our team. Everybody throws things into the discussion: what has to be improved, what has to be done, what features would be cool, etc. I have to admit that I am exactly one of these persons. I cannot easily ship something without knowing that it works and is useful. For those not familiar with the product and wanting to learn more, please check out the NoTouch website.

Now what did we do? First of all, the client operating system was updated to 2.35. In the operating system we usually update all the third-party components that we integrate, such as the Firefox browser, the Citrix client, the VMware View client, etc. This time Citrix hasn't supplied an update, but VMware has, and of course there is a new Firefox browser. Well, whenever I look, there is a new Firefox browser. Besides these integrations, we updated our own stuff. Some parts are user-visible, some aren't. In this release we have put a great emphasis on usability and security, so we have switched all communication protocols to HTTPS by default, even the local client's web interface. The actual list of changes is fairly long; it spans three pages of one-line change descriptions. I can rightfully say that Sonora has been our most productive release ever.
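For the curious: switching a local web interface to HTTPS essentially means putting a TLS context in front of the server. A minimal sketch using modern Python's standard library – this is illustrative, not our actual implementation, and the certificate/key paths are deployment-specific:

```python
import ssl

def make_tls_context(certfile=None, keyfile=None):
    """Build a server-side TLS context for a local web interface.

    A sketch only: certfile/keyfile would point at the device's
    certificate and private key in a real deployment.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    if certfile:
        ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx
```

You would then wrap the listening socket of the web server with this context; HTTP stays available only as a redirect, if at all.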

Sonora Pass Road
Sonora Pass Road

NoTouch 2.35, however, is just half of the story. On the administration and management side we have NoTouch Center – a product that our customers typically install once and use as a single instance to manage their whole PC/TC infrastructure. This piece was updated to 4.1.24. Since these are separate modules and may be updated independently, we keep separate version numbers, but we introduced the release code names as a kind of bracket around the releases. So NoTouch Desktop 2.35 and NoTouch Center 4.1.24 make up "Sonora", together with the newly introduced Stratodesk Virtual Appliance, a 64-bit Linux virtual appliance that contains NoTouch Center and a very easy-to-use PXE environment.

Even when writing this I just think "phew…" – so much. But the reactions from the people testing both the OS prereleases and the Virtual Appliance were just overwhelmingly positive. I like working with my customers. After all, there are just three stages of use: the lab, a known-friendly environment (beta customers), and general use. Going through the beta makes you ready for general use. As mentioned above, many issues were discussed and we have fulfilled almost all wishes: from updating the last Python-2.4-based components to Python 2.6/2.7, which is absolutely not user-visible but is the groundwork for other improvements (such as, in this case, HTTPS), up to textual changes in the first-time wizard. This is a double-edged sword, however. The more you improve and fix, the more you will be loved by your customers. The more new features you add, the more future customers you may have. The more you change, however, and the longer a release takes to get ready, no matter how good it is, the angrier your customers get. Balancing these targets is not easy. However it works out, the moment you have a release, you'll be just incredibly happy…

Updating from Python 2.4

Python is truly my favorite programming language. I considered myself an ANSI C hacker for a long time, and I did and do Java, PHP, and Bash. But nothing beats Python. Its elegance – it's "pythonic". So it came about that my product NoTouch Desktop had some code in it inherited from the early beginnings, and it was based on Python 2.4. While there is already a Python 3.2 out in the wild, most people and operating system distributions consider 2.6 or 2.7 the standard. Ubuntu 12.04 comes with Python 2.7, as does Mac OS X 10.8.

Not a big deal? Ha! Nobody can imagine what has to be done when a Python-2.4-based software stack is to be ported over to 2.6/2.7. Finding out which backported modules have to go and what has to be rewritten is a tedious task. Sometimes it's just plain trial and error. In the end, we were surprised how well things worked out. I remember a friend telling me how much his company had to invest when migrating from Ruby 1.8 to Ruby 1.9. So no disadvantage for Python here.

What proved to be difficult was not the Python code per se; it was the dependency stack we had. For example, our software used an older module that worked perfectly fine with Python 2.4. Updating to a newer version (please take a second to remember Aaron Swartz) that would work with Python 2.7, however, would break other things. Oh well. Eventually we did it and, as already mentioned, things worked out really well.
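One tiny but typical example of the kind of change involved (illustrative, not our actual code): Python 2.4 still shipped the standalone md5 module, which later releases deprecated in favor of hashlib (new in 2.5). A small compatibility shim keeps the same code running on both:

```python
try:
    from hashlib import md5       # Python 2.5+ (and 3.x)
except ImportError:
    from md5 import new as md5    # Python 2.4 fallback

def fingerprint(data):
    """Hex digest of some bytes, e.g. for detecting config file changes."""
    return md5(data).hexdigest()
```

Multiply this pattern by every deprecated module and backport in the stack and you get an idea of the porting effort. (On Python 3, `data` must be bytes.)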

Still, there are some things in software development that you just want to see behind you. Goodbye Python 2.4!