A Lesson to be Learned from Delta Airlines’ System Outage | Sherpas in Blue Shirts

Bimodal IT has become common parlance. The term rests on the underlying assumption that large enterprises require stability in their core IT systems and should therefore not take the risk of migrating to new technology. Instead, the old systems should be maintained in their current condition – or one that is fundamentally the same, with only minor adjustments – while current or next-generation IT is reserved for new requirements.

While the logic seems to make sense on the surface, there is a major flaw in this argument. IT systems aren’t like fine wines that get better with time. IT systems become old and outdated, just like your own personal computer. Think about it – how many years did it take you to become totally frustrated with the computer you LOVED when you first got it? It was the best thing on the market, was lightning fast, and did all the cool new stuff. But five years later, it has slowed to a crawl, the operating system isn’t compatible with the latest versions of the programs you want to use, and no one has the components or software required to service it. There comes a point when it is riskier to hold on to the old than to migrate to the new.

Now let’s test this argument to determine whether it makes as much sense for the enterprise as it does for the technology-forward consumer.

Delta Airlines’ entire system was shut down due, at least in part, to some of its core IT systems failing to switch over to backup when Georgia Power encountered a switchgear failure – the equivalent of blowing a fuse. The result? More than 650 flights were initially cancelled, with thousands more likely at risk of being cancelled later in the day. Of course, the impact of an outage of this magnitude is not limited to a single day. Somehow, all of the passengers still have to get where they were going and will need to fit in on other flights. There is no way Delta will be able to accommodate all those people, so other airlines will have the opportunity to pitch in and serve those stranded by Delta.

The outage began at 2:30 am, and Delta resumed flights by 9:00 am. In just six and a half hours, Delta lost millions of dollars, drove many of its passengers to other airlines, and dealt a major blow to its hard-fought battle to build a reputation as the most reliable U.S.-based international carrier. All in just six and a half hours.

This should force enterprises to look at their IT systems and ask just how stable they really are. Their systems, assuming they are similar to those of most major airlines, were likely built in the 1990s – more than 20 years ago. Given the rapid pace of technology change, a 20-year-old system is practically ancient. Who has the skills to service systems that old? How well does the system integrate with new technology? What level of customization and patching together has occurred over the past 20 years to keep the system relevant? How has the system been able to address the vulnerabilities and performance issues resolved by newer technology? In other words, what is the underlying risk associated with an IT system that is a couple of decades old, and is it reasonable to expect it to get better?

Bimodal IT might make sense in some instances when the cost of the risk of obsolescence is low. It can also make sense as a short-term strategy. However, nothing is stable forever, and the growing complexity of maintaining old systems adds risk that slowly creeps up over time.

Yes, transitions to newer technology are both costly and risky. But so is an unexpected total business shutdown caused by the inability of an ancient system to continue performing as it has in the past. Transitions to newer technology can be planned and managed, and they are much less of a gamble in the long run.

What You Need to Do to Get a Business Performance Breakthrough | Sherpas in Blue Shirts

At Everest Group, we’ve been studying the reason behind the disappointing phenomenon of powerful new disruptive technologies achieving only modest, incremental benefits instead of their promised performance breakthroughs. In my recent blogs, we’ve looked at whether the fault could be due to hype or immaturity of the technologies, whether it might be a lack of talent, or whether there is an inherent conflict of interest in companies’ incumbent ecosystems. None of those factors alone is sufficient to explain why we’re not getting the performance breakthroughs that are ripe for the taking. We think all of these factors may contribute to the phenomenon, but they don’t seem powerful enough to prevent the breakthroughs.

We think a missing ingredient is the organization. When companies implement these new technologies, they must also change the fundamental organization.

At the moment, these technologies tend to be implemented to save costs, but the potential change in performance goes far beyond cost savings. It has to do with customer experience and cycle time. Although cost is often reduced, that is a byproduct of the greater performance being generated.

For the buyer of the new technologies, the remedy is a willingness to step back and understand that you have to create a new strategic intent. That intent must focus on performance. It also requires a willingness to address the organizational dynamics. Whenever you digitize a workforce or embed analytics into it, you affect how work is organized. So often we see people attempting to bring in tools but just adding them to the existing organization. That doesn’t work.

Fundamentally, to get a performance breakthrough, you have to rework your organization. Doing that means significant change across all the pieces. Along with creating a new strategic intent, you have to change your organization, your ecosystem, your technologies, and your talent. All of those components have to come together and focus on the promised improvement you’re seeking. Only then will you get the step change in performance. If you do them individually or only partially, you’ll only get more of the same. You’ll get a better status quo, not a changed status quo.

Are Performance Breakthroughs Failing Due to Lack of Talent? | Sherpas in Blue Shirts

Where are the performance breakthroughs?

If you’re following my blogs regularly, you know that I’ve been discussing what we at Everest Group think is the issue of our time. Vetted, powerful new technologies such as cloud, analytics, cognitive computing, and robotic process automation (RPA) should be making big differences in businesses; but for the most part, they’re achieving only modest, incremental benefits. I’ve also blogged about whether the immaturity of the technologies is the reason they aren’t delivering performance breakthroughs. We also need to consider whether talent is the reason for the lack of a performance breakthrough.

A reasonable question is whether companies have people trained in using these technologies. Is the outcome of only modest benefits because of the IT talent? Do we need to replace our existing workforce or completely retrain our workforce?

In answering that question, I go back to the story I related in a prior blog about the breakthrough transformation American Express achieved in introducing its organization to agile development and DevOps. Yes, the company spent some time retraining its IT organization, but it didn’t have to replace the staff. The people quickly adapted to the new technologies.

H. D. Smith, a pharmaceutical distributor since 1954, transformed its business to the digital world and expanded to providing innovative services and solutions. As I previously blogged about this case, there was some dislocation of existing staff; but for the most part, the existing people mastered the new technologies.

So we can’t explain the lack of performance breakthroughs from powerful, disruptive technologies as a talent gap or a training issue alone. Yes, talent can contribute to the problem. But there is plenty of talent to drive breakthrough performance, particularly given how big the promise of these technologies is. Furthermore, cost should not be a major obstacle relative to the kind of benefits these technologies promise.

In my next blog, I’ll discuss another possible culprit for this phenomenon of only seeing modest, incremental benefits instead of performance breakthroughs from powerful new technologies.

How the IT Model Impacts Realities of Changing Economic Conditions | Sherpas in Blue Shirts

How CIOs can scale IT costs to fit challenges of oil and gas and other commodity businesses.

Businesses dealing with cyclical commodities face ongoing challenges from fluctuating market prices. It’s easy to understand in the context of the oil and gas industry today: the price of a barrel of oil recently dropped from $110 to $30. So these businesses have an extreme need to scale down their IT and business environments to fit that new reality.

But incremental steps won’t achieve enough savings. For instance, a labor arbitrage tactic of shipping more work to India is interesting but unhelpful: it would achieve roughly 20 percent savings, but these companies need 60 or 70 percent savings to match the new reality.
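To see the size of that gap in concrete terms, here is a minimal arithmetic sketch. The barrel prices and savings percentages come from the figures above; the `pct_drop` helper and the printed comparison are purely illustrative, not part of any real costing model.

```python
# Illustrative arithmetic only. The $110/$30 barrel prices and the
# 20% / 60-70% savings figures come from the text; the helper below
# is a hypothetical convenience function.

def pct_drop(old: float, new: float) -> float:
    """Percentage decline from old to new."""
    return (old - new) / old * 100

# Oil fell from $110 to $30 a barrel -- roughly a 73% revenue decline.
revenue_decline = pct_drop(110, 30)

# Labor arbitrage alone yields about 20% IT savings, far short of the
# 60-70% needed to match the new reality.
arbitrage_savings = 20
needed_low, needed_high = 60, 70

print(f"Revenue decline: {revenue_decline:.0f}%")              # 73%
print(f"Savings shortfall: {needed_low - arbitrage_savings} to "
      f"{needed_high - arbitrage_savings} percentage points")  # 40 to 50
```

The point of the sketch is simply that a tactic capped at around 20 percent savings cannot close a gap of this magnitude on its own.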

Here’s the other part of the reality that oil and gas companies and businesses in other cyclical industries face.

Read more at CIO online.