Luis Villafane On Installation Nightmares, And The Chosen Ones

December 19, 2017 by Dave Haynes

GUEST POST: Luis Villafane, MALER

I have been running digital signage hardware since 2001, and over all that time I’ve only found a handful of devices I’d call “The Chosen Ones” – gear I was comfortable putting in production environments.



Some have been incredibly reliable, hitting a minimal failure rate over a 13-year lifespan and only getting swapped out because it was time for a technology refresh. I’ve got other devices that are still running out there and will … not … die.

It is really important to understand the differences between lab environments and production environments. Regardless of how harsh or “real” you make your lab environment, production will always be worse.

The truth is that only your experience will help you understand and reproduce what you’ve encountered in the mean old real world. Even with that knowledge, sometimes you still have environments that can’t be reproduced in a lab setting.

Some have really good stories behind them that I can tell. Some are protected by NDAs and lawyers I don’t want to meet. Some are just protected by my very own embarrassment.

But here are some tales of horror and happiness, just in time for the holidays. With all of the networks that we run and have run at MALER, here are my Top 3 worst environments we’ve dealt with and the Top 3 pieces of hardware we have encountered over the past 17 years.

Worst Environments

3. Buses in La Coruna

This was our first true “corporate” installation, in our hometown. At the time, there were no solid-state drives, no true transport computers, no power supplies that adapted to a bus alternator/battery, and no suspension systems for the HDDs. Not even TFT panels for buses (just TVs). Well, maybe I should say: none that we could afford.

So, everything on that installation was done by hand. The power supply was made in our offices, the computer boards for the power supplies were hand-made, the computers, the shock absorbing system for the hard drives, everything was made by our team. Now that I look back, I am surprised nobody got electrocuted.

Anyhow … no matter what we tried, about 50% of the computers came back to the garage completely off. We changed the systems, reviewed them, added a UPS … even rode different buses for days at a time to figure out what was happening. But we could never find a fault. They always worked when we serviced them, every single time.

Final result? Well, we had the great idea of adding audio to all the adverts in a 20-minute loop of repeating commercials. The bus drivers were going crazy and powering off the systems because they could not take another repetition of the jingles. I cannot say I blame them. I would probably have taken a hammer to them if I were driving. They just said “it powered off” – because they didn’t want to get in trouble.

Buses = bad.

2. Fast Food Restaurant, 2005

Grease. Heat. I remember an engineer calling me during a support call, telling me that he put his pen against the back of the screen, and the grease build-up on the panel alone was enough to hold that pen in place. Amazing.

We learned to read the fine print on hardware warranties. Manufacturers set up definitions and formulas for operational temperatures that are impossible to understand – anything but logical, and even harder to prove wrong. However, when you do prove them wrong, it’s a great feeling. I can still hear the silence in the room when I showed off my findings on the last hardware failure.

1. London Underground
Damn, Damn, Damn and Damn. This was the nastiest of them all.

We produced so many software workarounds to cope with hardware failures on that system that I should get a PhD from some engineering school. We had 5 Digital Escalator Panels in our offices for six months. We did everything we could think of to make them fail. Do note they were running on Compact Flash cards (I think they still do) – memory cards normally used for cameras, not for running an OS 16 hours a day at 60% average CPU.

So, over those six months, we had no failures in our lab. We must have cut the power on them one billion times, corrupted the OS, reloaded it remotely, and crashed the OS SO many times. But they always booted up and always played content.

First day in a production environment, and the client was of course there to see it all. The place was full of people with ties, taking credit for something they knew nothing about. Someone decided to power everything down and back up to have all the systems fresh for the visit, and when the power came back, only four systems out of 44 were displaying content.

I remember the phone call as if it was yesterday. Talk about new ways to use the F Word.

We fixed it quite quickly, but sometimes I still dream about that phone call. We had to give a Masters degree-level dissertation on electrical engineering to explain what was happening. I still don’t know why, as we just provided a software solution. Talk about suppliers pointing fingers!

I also prepared a TOP 3 WORST hardware systems list, but my lawyer told me to be quiet, as some of those companies are still in operation today, designing more crappy systems.

Best Hardware Systems

3. TEW Digital Escalator Panels

Brilliant piece of hardware, but only in three stations. Brilliant engineering. These German guys sure knew how to engineer a system. It was beautiful just to look at the inside of the machine. It did have its troubles, but it was a truly manageable solution. It allowed us to change the boot sequence remotely, reload the OS, and have it fresh as new in 10 minutes. For a system running on a Compact Flash disk, without cache, it was heaven-sent.

2. Samsung DXN2

I have to recognize it, even if it is a monster company like Samsung. The Samsung DXN2 – a panel with a PC inside – was (and is) a brilliant and reliable piece of hardware, that is still working in many of our sites. We want to do a technology refresh, but the darn things just won’t die. If I could get my hands on more DXN2 machines, I would remove any of the current “newer” systems and put those ones in their place.


1. Swiss IBC Petrol Station PCs

This will come as a surprise to many, but back in 2002 we took over a system in Switzerland that ran at all of the TAMOIL gas stations. The hardware was designed and certified for petrol station operations by Swiss IBC. From 2002 to 2014 – 12 YEARS! – we had a 0.2% failure rate. No corruptions, no failures. Amazing.

They were Pentium PCs, running Windows 2000 Pro. Indestructible. As time passed, content requirements became a little more demanding, so they needed to be swapped out. (I remember a client sent me a 60-second, 2GB .mov file … hahaha.) If we were still playing MPEG-1, they would still be running today, for sure.

Of course, there are many more hardware systems that performed very well. The Nexcom & Moxa systems on Danish rail & buses were really good, but we only stayed around for a year, so I don’t know what happened to them after that. The Seneca I5 systems we have been installing for the past two years have a 0.0% failure rate. I mean, damn! But … it’s just two years, and I want more time before putting them on my all-time list.

Want to know the worst? Buy me a drink at ISE, and I will tell you all about it … informally.

Enjoy the holidays!

  1. Kevin Cosbey says:

    It’s great to hear that after 2 years in the field, the Seneca Media Players haven’t failed! At Arrow, we’re committed to engineering, testing, and manufacturing the most purpose-built Seneca media players in the industry. Thank you, Luis Villafane, for your feedback!
