Author: Yugal Joshi

Pokémon Go Is Here, but Can You Handle It? | Sherpas in Blue Shirts

In a seeming nanosecond, the mobile game Pokémon Go has shown us how:

  • Demand and “going viral” are unpredictable
  • Technology is not an objective; it continues to be an enabler
  • Rapid technology scalability enables an improved user experience (and its absence degrades it)
  • Testing and automation need to be given due attention

Here’s why
On 6 July 2016, Pokémon Go – an augmented reality game launched by Nintendo and Niantic, a Google spinoff – was released in Australia, New Zealand, and the U.S., and has already taken the Internet by storm. Its fan following and welcome are largely unprecedented, not only in the technology world, but also in the equity market. Nintendo, the part owner of the hugely popular Japanese franchise Pokémon, has already witnessed a sharp uptick in its stock price, soaring 23 percent in a single day, the company’s best one-day jump since the 1980s.

So what’s driving this popularity? First, people’s associations with Pokémon, which has a global audience across a large age group, and high merchandise sales. Second, the immersive experience of the game, which brings augmented reality onto the mobile phone and engages users to take steps (literally) to interact with the Pokémon that appear on their smartphones.

Without putting you to sleep with talk of how smartphones are pervading the user environment and how they are redefining the digital landscape, here are the important take-aways:

Demand arises out of nowhere
When Niantic began developing the mobile game, it had no idea that it would be so widely downloaded on both iOS and Android devices. In fact, in terms of app downloads, it has already overtaken Tinder, a popular dating app, and is well on its way to beating Twitter! That’s commendable, given that Tinder was launched in 2012, Twitter in 2006, and Pokémon Go in July 2016!

If that’s not enough, Pokémon Go’s daily active user count, a metric used to measure audience engagement, is touted to be much higher than even Twitter’s. The Internet is flooded with data points showing how the game is beating every standard metric used to measure success. In fact, its usage time is higher than that of the communication app WhatsApp, the photo sharing app Instagram, and the rainbow filters of Snapchat.

Demand for software can spring up from anywhere without warning or notice. Even a simple game can become an overnight hit, grabbing as much media attention as Brexit. No amount of planning will help you be completely prepared for unexpected demand. When demand is so unpredictable, what can you, as technology practitioners and service providers, do?

Test, release, scale … repeat
Time and again, our conversations with enterprises and service providers have centered on technology for its own sake, rather than on leveraging it to achieve core business objectives. This overemphasis on technology, without understanding its true nature and purpose as an enabler, is misleading and, in a business environment, dangerous.

The developers of Pokémon Go had to learn this the hard way. They clearly had not anticipated such massive demand in only three countries. It reached such proportions that their international rollout plans had to be put on hold until they got their house back in order.

Users have reported cases of the game failing to load, servers going down, the game crashing mid-play, account integration with Google failing, identity management not working, the camera function for the augmented reality piece backfiring, and the data management function failing. All in all, the only thing consistent, other than the dramatic popularity of the app, is the failure of the technology to scale up to meet the demands of its Poke-hungry consumers.

This highlights the importance of testing in a mobile context. When apps and games are swarming the app stores on a daily basis, there is a dire need to perform stringent levels of load testing to ensure that demand spikes can be handled. This obviously requires the application and the underlying infrastructure to be highly scalable in a short timeframe, especially when popularity (and hence the consumer experience) gets defined overnight. Testing and operations, in the context of application services, need to work in tandem to pre-empt and prevent such instances. Further, these incidents bring to the fore the need for infrastructure automation and rapid provisioning. Cloud-based models that lack rapid scalability (both up and down) simply do not serve the purpose.
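The load-testing discipline described above can be sketched in miniature. The following is an illustrative sketch, not a real test harness: it substitutes a simulated handler (a short sleep) for a real endpoint, and all names and numbers are invented for illustration. The point is the shape of the exercise: ramp concurrency and watch where tail latency degrades.

```python
import time
import random
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> float:
    """Simulated server handler; a stand-in for a real endpoint call."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.001, 0.005))  # pretend work
    return time.perf_counter() - start

def load_test(total_requests: int, concurrency: int) -> dict:
    """Fire requests concurrently and summarize observed latency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(lambda _: handle_request(),
                                    range(total_requests)))
    return {
        "requests": total_requests,
        "p50_ms": latencies[len(latencies) // 2] * 1000,
        "p99_ms": latencies[int(len(latencies) * 0.99)] * 1000,
    }

if __name__ == "__main__":
    # Ramp up concurrency to see where latency starts to degrade.
    for concurrency in (1, 10, 50):
        stats = load_test(total_requests=200, concurrency=concurrency)
        print(concurrency, round(stats["p50_ms"], 2), round(stats["p99_ms"], 2))
```

In a real exercise the handler would be a network call against a staging environment, and the ramp would continue until a target percentile latency or error rate is breached; the breach point is the demand spike the infrastructure must be able to absorb.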

Pokémon aficionados might forgive the app developers for their shortcomings for a while. But keep in mind … user forgiveness is fickle and short-lived.

Enterprise Mobility: Downloads, Usage, and ROI | Sherpas in Blue Shirts

As part of Everest Group’s digital services research, we discuss digital transformation initiatives with multiple enterprises. Of course, mobility is one of the cornerstones of such conversations. In a recent discussion with a retail company, I was told how they transformed their store operations by providing mobile-driven BI solutions to their managers.

Although the business case was strong and the results were clearly evident, when I pushed for a measurement of success, I received an intriguing reply that was in line with data captured in our earlier published research on moving beyond feel-good ROI (Return on Investment). The enterprise said that more than 80% of its store managers and employees had downloaded and accessed the mobile app, and that this was a great success.

Such discussions are not uncommon. Enterprise shops running mobile initiatives are typically benchmarking their success based on how many users install the app or how many actually use it rather than the business impact. Let’s be clear on three things:

  • App downloads do not mean app usage: This should go without saying. Yet app downloads have become a metric for the IT teams developing mobility solutions, which implies they are fooling either themselves or the business users about mobility. Anyone can download an app, but a download does not imply usage. From a project team’s perspective, however, this metric is easy to capture and is used to impress other stakeholders.
  • App usage does not mean ROI: Even if the app is being used, does that mean it has achieved its objective? How is the ROI defined? A lot of enterprises define ROI based on usage metrics, and a “good” number is considered the final objective in itself. It’s like running a 100-meter sprint with a KPI declaring that 15 seconds is great, without realizing others are finishing in under 10. Again, this is an easy metric that can be shown to the business to make them believe that mobile initiatives are gaining traction.
  • ROI does not mean business impact: An ROI calculated on usage metrics, though meaningful, still does not track the business impact. What is the eventual business impact of the app? How are you going to track that? Metrics like these are what enterprises should be concerned about.
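The three-layer distinction above can be made concrete with a toy calculation. All numbers and names below are hypothetical, invented purely to illustrate how a usage rate, a business-impact figure, and an impact-anchored ROI differ from a raw download count:

```python
# Hypothetical funnel for a store-manager BI app; all numbers are illustrative.
def funnel_metrics(downloads, weekly_active, decisions_influenced,
                   uplift_per_decision, app_cost):
    """Contrast the three layers of measurement: downloads, usage, impact."""
    usage_rate = weekly_active / downloads            # "80% downloaded" says nothing about this
    impact = decisions_influenced * uplift_per_decision  # value the app can plausibly claim
    roi = (impact - app_cost) / app_cost              # ROI anchored to business impact
    return {"usage_rate": usage_rate, "impact": impact, "roi": roi}

m = funnel_metrics(
    downloads=1000,            # looks great as a headline metric
    weekly_active=300,         # the app is actually used far less
    decisions_influenced=120,  # decisions traceably informed by the app
    uplift_per_decision=500.0, # estimated value per informed decision
    app_cost=40_000.0,
)
print(m)  # usage_rate 0.3, impact 60000.0, roi 0.5
```

The hard part, as discussed below, is not the arithmetic but populating `decisions_influenced` and `uplift_per_decision` credibly; download counts require no such attribution work, which is exactly why they get reported instead.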

Why do enterprises choose to do this?
The simple reason for tracking ROI based on usage or downloads rather than business impact is that the latter is extremely difficult to measure and correlate. Attributing business success to an app is fundamentally difficult, so enterprises take the easy way out and collect download and usage metrics. Business outcomes depend on a number of moving parts. An app can lead a horse to water, but it can’t make it drink.

Is this scary?
Some would argue that if business impact is hard to measure, then enterprises should not use it to define the ROI of digital initiatives such as enterprise mobility. But isn’t it scary that enterprises are making such investments simply because they have access to a cool new channel through which they can share information with their external or internal customers? Is the “hope” that enterprise mobility will eventually have a business impact a good enough reason to invest? This ties nicely to our earlier published research on digital investments.

The road ahead?
Are there other KPIs to measure the effectiveness of enterprise mobility initiatives? If yes, how do they link to the business impact? Enterprises need to think through three fundamental aspects:

  • Define the business impact: Is it improving the top line, enhancing customer experience, addressing customer churn, creating better personalization, targeted messaging, or driving operational efficiency? Enterprises may want to address a lot of the above and more through a single app. However, they need to spell it out in terms of the business impact the app needs to demonstrate.
  • IT-business partnership: The above challenges are not only of enterprise IT’s making. In fact, enterprise IT is merely the messenger. The bigger problem lies on the business side, with line managers who are unable to grasp their business objectives and visualize how enterprise mobility can drive them. Enterprise IT needs to partner with business stakeholders to create a value journey map in terms of the expected ROI from each stage of mobility adoption. The final stage must be a tangible business impact.
  • Define failure: My market interactions suggest that 70-80% of apps do not meet their intended objectives. This could be a fault in the objectives themselves, in the way they are measured, or in the way they are pursued. Therefore, enterprises need a better definition of failure before they reduce investments in specific enterprise mobility projects.

While there are only two things of importance in business, revenue and cost, there are multiple levers that drive these fundamental outcomes. Enterprise mobility can impact these levers as long as the right KPIs are defined. Impressive metrics around downloads or usage do not serve a meaningful purpose. Enterprises are not digital start-ups whose valuation is driven by usage rather than real money. Therefore, if enterprises want to really extract value from their mobility initiatives, they have to develop measurable KPIs linked to business impact instead of easy-to-measure feel-good factors.

How are you measuring your enterprise mobility initiatives?

IoT Analytics: Living on the Edge or Sailing in the Cloud? | Sherpas in Blue Shirts

The technology vendor landscape of IoT edge analytics is heating up, with providers such as Cisco and IBM collaborating in an edge partnership, GE Digital’s Predix augmenting its edge capabilities, HPE boosting the capabilities of its Vertica portfolio, and PTC adding muscle to ThingWorx.

They and all the leading IoT analytics vendors, including AWS IoT, Microsoft Azure IoT Suite, and SAP HANA Cloud IoT Service, are adding or enhancing their edge analytics capabilities, as there is a growing realization that cloud-centered analytics may not be able to deliver the necessary run-time “action” expected of the IoT.

At the same time, the market realizes that edge analytics does not add meaningful value in terms of detailed insights into a system, and is generally suitable for small-sized projects. Moreover, many enterprises believe that though edge analytics reduces the entry barrier to the IoT, it does not provide meaningful value if not enhanced with a cloud-based data crunching system.

Enterprises that deal with a very large set of IoT data will normally have requirements for both real-time analytics and detailed insights. Thus, they may consider splitting their analytics lifecycle into two parts, running analytics models on the edge and performing detailed IoT analytics and model development in a cloud. However, this is easier said than done, and will require creative thinking from the architects, application developers, and device makers. It will also require enterprises to have different efficacy models for edge analytics and cloud-based analytics. For edge analytics, it’s about driving real time decisions or device-specific analysis, while cloud-based analytics is most suitable for fundamentally enhancing the IoT value chain.

IoT architects understand the underlying limitations they face in running analytics on edge devices (e.g., memory, capacity, and power consumption), yet they are being pushed by the business to be as close to the data as possible. Smart architects will quickly realize that crunching data at the edge can be counterproductive, expensive, and often just impossible. They can address business requirements by delivering a “centralized + edge” analytics scenario. They should also segregate analytics operations from execution, and creatively deploy these at the edge or in the cloud as needed.
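A minimal sketch of the “centralized + edge” split might look like the following. The threshold rule, summary statistics, and all values are illustrative stand-ins for real edge models and cloud pipelines: the edge reacts in real time and forwards only a compact summary, while the cloud does fleet-wide analysis and pushes refined parameters back down.

```python
import statistics

ALERT_THRESHOLD = 90.0  # illustrative; in practice such a threshold is learned in the cloud

def edge_filter(readings):
    """Edge side: act in real time, forward only a compact summary."""
    alerts = [r for r in readings if r > ALERT_THRESHOLD]  # immediate local action
    summary = {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
    }
    return alerts, summary  # alerts handled locally; summary shipped to the cloud

def cloud_analytics(summaries):
    """Cloud side: detailed insight and model refinement across many edges."""
    fleet_mean = statistics.mean(s["mean"] for s in summaries)
    # e.g., recompute the alert threshold from fleet-wide behavior
    # and push it back down to the edge devices
    return {"fleet_mean": fleet_mean, "new_threshold": fleet_mean * 1.5}

alerts, summary = edge_filter([70.0, 95.0, 80.0, 99.0])
model = cloud_analytics([summary, {"count": 3, "mean": 60.0, "max": 88.0}])
print(alerts, model)
```

The design choice this illustrates is the bandwidth asymmetry: the edge sends three numbers per batch instead of every raw reading, while the detailed model development that needs the full picture stays in the cloud.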

However, with all this confusion running amok, enterprises are finding the IoT landscape technologically overwhelming. Everest Group’s digital research suggests that though ~50 percent of global enterprises have piloted some type of IoT project, less than 20 percent consider IoT to be among their top three investment priorities. The key reason is that enterprises are not yet convinced that they are ready for the IoT, and thus continue to be fence sitters. While there are headline-grabbing case studies, especially in the industrial IoT space, most enterprises are still figuring out the IoT jigsaw puzzle.

Technology vendors, system integrators, and other market participants need to play a constructive role in shaping the IoT, a once-in-a-generation opportunity, into reality. It will be a great pity if they continue to focus on a short-term agenda of selling new technologies to enterprises, rather than meaningfully helping them deploy IoT solutions in their business.

Enterprises will have to be cautious in creating the most suitable IoT architecture for each project. Beyond the technical feasibility, they will be well-served by understanding the vast landscape of IoT technology, and getting a good grasp on the leading IoT technology vendors’ offerings.

Legacy Technology = Technology that Worked, and It’s Time We Showed Some Respect | Sherpas in Blue Shirts

The common theme in all my market conversations around digital disruption with enterprises, technology vendors, and system integrators is the word “legacy.” But, no one is clearly defining what a legacy technology is. Is it three years old, or three decades old? One that entails costly support, one with diminishing skills, one that is proprietary?

Technology can become legacy only when it has worked well, and still serves some purpose. Indeed, as true legacy technologies helped make businesses what they are today, they deserve some overdue respect. Of course, on the flip side, many technologies that should have been decommissioned lingered on not because they served well but because switching was costly and risky.

Legacy technologies continue to run the most mission-critical workloads in enterprises. However, times are changing fast. Moore’s chasm is reversing, and enterprises are now chasing startups, or incubating internally, to lead technology-driven business transformation and disruption. With this, enterprises must address numerous critical questions. How should they go about selecting the legacy technology most suited for upgrade or replacement? How do they make a business case beyond cost savings? Do their CIOs and IT leaders have sufficient data points for this? What assurance do they have that they won’t regret their decision three or five years down the road?

One big challenge I see is that enterprises believe their legacy technologies are sacrosanct and should not be altered or experimented with. This has worked beautifully for technology vendors and systems integrators who have fed on these fears to sell their solutions. They promise to “integrate” legacy with newer technologies without disrupting ongoing operations. They are overzealous in committing that the investments in legacy will be protected.

While this sounds good in theory and has worked in the past, it has outlived its utility. For digital business to work, legacy technologies must be meaningfully altered and upgraded to incorporate the fundamental concepts of newer paradigms. These include an open architecture, service orientation, environment independence, dynamic resource allocation and consumption, and elasticity. Anyone promising to “protect” legacy without introducing the changes above is lying or creating a poor solution.

Though today’s technology is tomorrow’s legacy, the pace of legacy generation in the future will be exceptionally rapid. Enterprises will not be able to make their three-year, five-year, or ten-year plans, and will have to rely on extremely agile operations to ensure they can plug and play the most suitable technologies. Unfortunately, things are not getting any simpler. The myriad of technologies with their own protocols and lack of standards, multiple APIs with different performance characteristics, proprietary cloud technologies, and other similar disparities are again creating integration challenges.

What is the way forward? Enterprises cannot control the flow of technologies available in the industry. Their best bet is to invest in people who are going to use these technologies to create business outcomes. I believe the days of technology specialists are fast fading, and enterprises will require “multi specialists.” These are resources who understand technology beyond the monocular views of application developers or the operational view of the IT organization. They understand how and why different services should talk to each other, how to develop fluid, self-contained workloads, how to design systems that are open and allow technologies to be hot swapped, how to leverage external systems, and how to continuously monitor the impact of technology on the business.

However, new systems are becoming open yet more complex, vulnerable to attacks, costly to maintain, and difficult to architect. Enterprises are investing insufficiently in the people who drive the technology agenda. The silos of technology and business continue, when they ideally should have collapsed. And although the idea-to-cash cycle may have shortened, it has not been fundamentally altered.

Therefore, despite their best efforts and all the technologies available, enterprises may find themselves right back where they started – the dreaded legacy.

What do you think is the best way forward?

 

Enterprise Technology 2016: What Will and Won’t Happen | Sherpas in Blue Shirts

Now that the dust has settled from the New Year frenzy, it is a good time to channel our inner psychic and do some crystal ball gazing about enterprise technology trends. Following are the technology trends that we see playing out in 2016 and into early 2017.

  1. Customer centricity and UX are king

The fundamental disruption being caused by consumerization of enterprise IT has profound implications for how organizations approach the user experience (UX). As consumers’ expectations and benchmarks for next-generation channels evolve, UX is key in enabling the digital mandate. This translates into an enhanced focus on superior design, collecting data (user behavior, regional preferences, A/B testing, and demographic information), and personalizing content. Design coupled with the appropriate tracking/monitoring will be crucial in driving meaningful engagement through a personalized UX. While global technology providers have generally lagged in bringing UX and design thinking into solutions, this is changing. Whether it is Accenture (per its 2013 acquisition of Fjord), Infosys (with AiKiDo, its next-generation services in Design Thinking), or Wipro (via its 2015 acquisition of Designit), service providers have started looking outside their organizational setups to develop these capabilities through M&As, acqui-hiring, and setting up separate business units, often outside their P&L play.

  2. Open APIs to catalyze innovation

Numerous examples of unlocking barriers to provide open access to APIs – catalyzing innovation, gaining developer trust, and accelerating the pace of use-case creation – emerged in 2015. For instance, in September IBM acquired StrongLoop, a provider of popular application development software (enterprise Node.js) that enables software developers to build applications using APIs. In November, IBM launched API Harmony, with cloud-based API matchmaking technology for developers. It also opened up access to IBM Watson’s cloud-based API. In an attempt to woo developers, Salesforce announced App Cloud, which integrates its existing Force, Heroku Enterprise, and Lightning services to create an interactive learning environment for “citizen developers” creating Salesforce apps. Apigee, a company that helps organizations build and manage API connectors, went public in April 2015, and its revenues and margins are performing well. It has also witnessed traction with large enterprises such as AT&T, Bechtel, Sears, and Walgreens, to name a few. Given how crucial APIs are to advancing innovation and enhancing the digital experience, we’ll see many more technology companies jump on the open API bandwagon.

  3. DevOps, ITOps, NoOps, and ShadowOps will continue to slug it out

The emergence of new operating paradigms continues to transform IT operations. DevOps, the latest, promises quick and reliable unified development and operations to meet business needs. Then there’s conventional ITOps, and NoOps, an extension of DevOps wherein developers take over responsibility for processes such as architecture design, capacity planning, and performance optimization. In the absence of a clear winner in 2016, there will continue to be various shades of these methodologies in place across industries and organizations, depending on the maturity of the IT setup, specific needs, business constraints, regulatory requirements, etc. DevOps adoption will continue to struggle to move beyond lip service as organizations grapple with challenges related to change management, restructuring, talent, and governance to manage complex IT environments.

Read our previous take on DevOps

  4. IoT – Let’s cut to the chase, shall we?

The conversation about the Internet of Things (IoT) will move beyond just sensors and connected devices. We have already begun to see the emergence of new business models such as Printing-as-a-Service, Home Automation-as-a-Service, Blood Tests-as-a-Service, Transport-as-a-Service, etc. IoT and the connected world have made these individual products into continuously evolving prototypes that can be enhanced through over the air updates, thereby introducing new features. Connecting various disparate products will lead to improved analytics and, therefore, better forecasting and customer experience, highlighting new possibilities for IoT-based value creation.

  5. Security: CISOs step up to the plate

It is time for Chief Information Security Officers (CISOs) to take their place in the sun. After years of CIOs treating security as a hygiene checklist item, recent high-profile data breaches and global cyber warfare have placed the spotlight firmly on cybersecurity. Our digital services research indicates that 70 percent of enterprises believe cybersecurity is a major concern in their digital journey. Cybersecurity initiatives also rank as the second most important among digital enablement priorities. In the single biggest affirmation of this change, the White House announced on 9 February 2016 that it is seeking to hire its first Federal Chief Information Security Officer as part of a new Cybersecurity National Action Plan. As security takes a seat on the board, enterprises will start treating cyber risk on par with financial risks. CISOs should see budget approvals getting easier as they look to revamp cybersecurity preparedness, enhance audit and governance controls, and shift the focus from prevention to mitigation. Security will gain a more prominent place in public discourse in the context of the 2016 U.S. presidential elections (you may recall that attackers targeted both presidential candidates’ websites and emails during the 2008 and 2012 elections).

Enterprises need to take a hard-nosed look at their technology spend and realize that the walls between business and IT need to break down. All aspects of IT – application development, maintenance, testing/QA, infrastructure – are getting aligned to specific business outcomes for greater visibility, predictable demand, enhanced governance, risk mitigation, and audit control.

These themes are already sweeping the global technology landscape, and will only gather steam as the year progresses. We would love to hear what you have to say about enterprise technology in 2016, and beyond.

Technology Disruption: Is This the History of the Future and a Bankruptcy of Innovation? | Sherpas in Blue Shirts

The “technology disruption” euphoria is everywhere. Though cynics say the technology industry comes up with something every five years and something big every ten, this time it could be different. Not everyone, however, buys the concept of dramatic disruptive change; some instead back “incremental innovation.”

The incremental innovation supporters say that many things “appear” similar to what they were ages back. But, when you peel back the onion, you see lots of changes…think today’s cars, compared to those in Henry Ford’s time. Their take is that because incremental innovation doesn’t grab headlines, it’s considered boring. On the flip side, supporters of disruptive changes point out that Edison could never have made the light bulb by incrementally innovating the lamp.

Irrespective of which camp you are in, it’s clear that things in the broader technology world are increasingly resembling what they wanted to compete against. For example, check out Amazon now planning to open brick and mortar stores.

Here’s a look at three major technology evolutions from the recent past and how they are taking us back in history.

  • Mobile apps: Web applications were considered the panacea for everything wrong with desktop applications. Unlike desktop applications, which required installation on different devices, web applications were easy to access from anywhere, on any device, via any OS, as long as the user had a network connection and a browser. However, mobile apps are taking us back to the desktop world, where applications (or apps) need to be individually installed (albeit a lot more easily), individually managed, and secured, and must work across multiple form factors and OSes.
  • Edge computing: As the cost of network bandwidth skyrockets, organizations are realizing that the idea of central computing is very costly. Therefore, the data in various analytics platforms is processed at the nodes themselves, rather than at a central junction. Additionally, in the broader Internet of Things (IoT) ecosystem, organizations simply cannot afford, either economically or technically, to send all data to a central crunching unit. The IoT edge devices need to do the computing themselves, instead of keeping an always-on data channel to central servers. This means going back, at least partially, to the days of intelligent end clients.
  • Dedicated SaaS: Large technology vendors, unable to fight the nimbler cloud SaaS providers and desperately wanting to ensure client renewals, are taking the dedicated SaaS route. While purists question how SaaS can be dedicated, others argue that if an IaaS platform can be “private,” SaaS can be as well. Many large technology vendors have made SaaS an evolved form of application hosting, which has been around for decades. They add their financial reengineering and deep discounting to lure buyers, without realizing the challenges they will face during renewals. However, the bottom line is they are going back in history.

Of course, we can argue that irrespective of how these technologies are delivered or consumed, they assist in doing new and better things in a different way. However, does this mean that true innovation, whether incremental or disruptive, is not possible in the technology industry? Consider that there have been multiple articles on how corporations are buying back their shares to improve earnings per share (and the linked executive compensation), rather than focusing on R&D. The thesis here is that many organizations are realizing they get greater benefit from returning cash to their shareholders than they do from the risks of innovation. Innovation skeptics also point out that many companies that were the first to innovate never got their returns.

So, is the technology industry destined to just repackage what existed earlier, or will we see something that is fundamentally different? Are we destined to oscillate between different technology models every few years? The bigger question is, should the pursuit to be innovative be for its own sake, or to create some meaningful value? I firmly believe that regardless of whether the value comes from incremental or disruptive innovation, the value needs to be the key driver.

How can technology companies (which these days means every organization) innovate yet manage the risk in these times of rapid change? How can they jettison their age-old planning cycles and become more agile and nimble?

The industry needs to answer these questions to ensure the pipeline of innovation does not dry up. The future of technology is right here, right now, and all stakeholders need to shape it to ensure it neither goes back in history nor becomes a prisoner of old habits that die hard.

What do you think?

Software Development: An Argument Against DevOps | Sherpas in Blue Shirts

Technology vendors and enterprises continuously experiment with new ways of software delivery to ensure the software is bug free, meets the timeline and cost objectives, and can be updated going forward. For this to happen, multiple teams within the software development ecosystem need to collaborate and talk the same language. Enter DevOps.

While the Agile methodology was a meaningful initiative to address the time-to-market aspect of software, it didn’t serve the entire purpose, and release management and operations continued at a sluggish pace. Developers were developing code, but it was rarely delivered into production at the same pace, despite “continuous integration” having been around for ages. DevOps tries to address this disconnect by ensuring that tasks such as provisioning, configuration, and security are automated and simple. It attempts to bring the different moving parts and teams in an application lifecycle onto the same page, working toward the same target, using the same terminology, and – possibly most important – working with the same underlying infrastructure.

Opponents think DevOps is a bad idea and much more applicable in startups than enterprises. Not surprisingly, proponents believe this to be a loser’s argument from large organizations and one more excuse not to evolve their archaic systems.

From software developers’ perspective, DevOps is met with a mixed bag of reactions. Very few developers truly understand it (they are busy with their work, coding), some believe in its promises, and some are skeptical. The skepticism comes from the perception that DevOps is trying to shift the onus of managing databases, application servers, and systems onto the developers. It is trying to make them what is popularly known as “full stack” developers: superhuman developers who are a database administrator, a system administrator, a security administrator, and other things all in one.

Many software developers believe that DevOps will add to their already burgeoning overhead, rather than helping them write code that can create great software (I discussed some of this in an earlier blog where I made a case for allowing developers to be more crafty, rather than tied to rote coding). While some senior developers believe it will ensure better time-to-market, reduced cost of quality, and collaboration between teams, they are apprehensive about the role they have to play in this entire game. Though developers do not like raising tickets for everything, and prefer quicker access to infrastructure – which is a key driver for the adoption of cloud services – most believe DevOps may be taking things too far for their liking. They also believe that the onus to improve the entire application delivery process is being put on their shoulders (they are not at all pleased with that, and feel they are being asked to address IT operations’ laggardness). Moreover, a lot of them just do not want to work “with” IT operations.

Adding to all this is the typical noise around “culture change.” Of course, anything meaningful requires a culture change. However, viewing DevOps only through a culture lens is the proverbial failure to see the wood for the trees. DevOps requires just as much investment in tools, collaboration platforms, cloud services, release management solutions, and development platforms.

So how can organizations make developers see the value of DevOps? Cut through the theoretical knowledge and the puritanical message of culture change. Or call it something other than DevOps. Make developers see beyond the confines of their code to how it fits into the broader scheme of things. Help them understand the challenges of IT operations, and how those challenges are impacting software delivery and hurting the business (of course, IT operations also needs to understand the same about developers). Help developers link DevOps adoption back to business benefits and understand the importance of adopting such principles. And, finally, give them hard metrics and data to track and analyze the adoption. Do not make it a benign exercise in vanity, but something meaningful and tangible.
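“Hard metrics and data” can be as simple as computing adoption indicators from deployment records. A minimal sketch: the record format and the two metrics chosen (deployment frequency and change failure rate) are illustrative assumptions, not a prescribed standard.

```python
from datetime import date

def deployment_frequency(deploys, start, end):
    """Average deployments per week over a date range."""
    weeks = max((end - start).days / 7, 1)
    in_range = [d for d in deploys if start <= d["date"] <= end]
    return len(in_range) / weeks

def change_failure_rate(deploys):
    """Fraction of deployments that caused a production failure."""
    if not deploys:
        return 0.0
    return sum(1 for d in deploys if d["failed"]) / len(deploys)

# Illustrative records, e.g., exported from a release-management tool.
deploys = [
    {"date": date(2016, 7, 4), "failed": False},
    {"date": date(2016, 7, 11), "failed": True},
    {"date": date(2016, 7, 18), "failed": False},
    {"date": date(2016, 7, 25), "failed": False},
]

print(deployment_frequency(deploys, date(2016, 7, 4), date(2016, 8, 1)))  # 1.0 per week
print(change_failure_rate(deploys))  # 0.25
```

Tracking even two numbers like these over time turns the adoption conversation from opinion into trend lines.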

The industry should keep experimenting with newer ways of application delivery, and throwing the DevOps baby out with the bathwater may not fit into that mix.

Software Development: Can We Please Bring the Craft Back? | Sherpas in Blue Shirts

“Are they developers or donkeys?” This rhetorical question, posed to me by one of the technology leaders in a small software firm, pretty much sums up the years-in-the-making mood in the software development world.

Large software vendors to a great extent, and outsourcing providers to a certain extent, are the culprits who have reduced software development to a mechanical job in which developers require more hands than brains. The argument for doing this is the quintessential pursuit of making software development an “engineering” process where things are repeatable, driven by statistics and mathematical rigor rather than creativity.

Advocates of this philosophy believe it is too risky and dangerous to rely on “whimsical coders” to run an organization, and instead require coders who follow processes, structures, and methods. They also purport that this approach enables them to “create” many more software developers than if they relied solely on “creative developers.” This belief set has resulted in a parallel multi-billion-dollar industry of certifications, training, and coaching.

Some critics of this approach allege that large software vendors make their developers “robots” to meet their financial and time-to-market commitments. Recognizing this, Pete McBreen’s 2001 book Software Craftsmanship set the ball rolling, questioning the fundamental assumption that making software development an “engineering” process was more valuable than enhancing the craft of the developers. In the same year, the Agile Manifesto also questioned some of the concepts of software engineering, valuing software developers and their collaboration over tools, processes, and documentation.

Nothing meaningfully changed. As software continued to be used for business efficiency rather than growth, the software development world chose structure, tools, and frameworks over developers’ fundamental capability to create something meaningful.

However, with digital transformation and the increasing adoption of software across businesses, the value an organization can derive by becoming a “software organization” has grown many-fold. Software is not only a support for business; in many cases, it has become the business itself. It is an essential component of both the modern organization and the traditional company.

The critical nature of software will push the demand for craftsmanship in software development beyond existing structures and frameworks. Organizations have started to realize that as the impact of software expands to revenue generation and customer delight, they will have limited competitive advantage if all software is made in just one way using the same set of tools and processes. Software built by creative developers is what will drive competitive advantage. And despite the growing adoption of standard packages for back-end operations, most organizations continue to rely on, and will possibly increase adoption of, custom software for client-related activities.

We all understand and appreciate that such drastic change will come in small steps, and that the significant required changes may well kill some existing organizations. However, I believe craftsmanship must return to software development to make it an intriguing and interesting field that goes beyond rote copy-pasting of code to meet some random deadline.

Of course, not all software will be developed in this manner, and the industry will always require pieces made through structured development. In fact, these may be far more prevalent than “crafted” software. However, technology companies, enterprises with large IT pools, and IT service providers that fail to realize the importance of making their developers think on their feet, and instead dumb them down to achieve self-assigned financial and client objectives, will risk losing not only talent but also market relevance.

Digital Marketing? Digital Will Kill Marketing | Sherpas in Blue Shirts

“When you have a hammer, everything looks like a nail.” This quote from The Psychology of Science easily, and disconcertingly, applies to many of today’s marketers, who are vigorously using digital technologies to “nail” the multiple customer touch points – e.g., context-based services, IoT, mobility, and social collaboration – at their disposal.

Indeed, there is significant vendor-sponsored “research,” from the likes of Adobe, IBM, Microsoft, Oracle, Salesforce.com, SAP, and marketing consultants, that hammers home the idea that marketing has no future without digital technologies. Volumes of literature debate and explain how digital technologies are changing the role of traditional e-marketing, and how these technologies provide the needed ammunition in terms of social conversations, mobile interfaces, and consumer analytics.

But there’s been surprisingly little discussion on whether marketers are overdoing it, whether all marketers are equally equipped to drive such technology-heavy initiatives, and whether digital marketing strategies benefit everyone, all organizations, across all industries. Here’s my take on a couple of these points.

  1. Most marketers do not fundamentally understand technology: For example, they get carried away by Facebook likes, and are overwhelmed or over-excited by what they see from marketing technology vendors, such as a new content management platform, irrespective of the value delivered. Though there is “hot money” flowing toward digital marketing, that money should not drive the adoption of digital technologies. For example, the business case for data analytics may rest on an “availability heuristic” bias, made without asking whether the underlying data is good or bad, or whether the analytics can produce meaningful insights and business value or will just become another academic exercise to please business leaders.

  2. Digital marketing is not only about marketing anymore: Earlier, marketers could operate in their ivory towers with somewhat limited integration with the broader organization, as digital technologies were limited to email marketing, surveys, and occasional mobility projects. Today, however, with the plethora of customer touch-points, the fundamental shift in consumers’ interaction with brands, and the confluence of big data, the IoT, context-driven services, and mobility, marketers must realize that digital’s impact is broad-based across the organization. Many different departments, including production, support, supply chain, procurement, operations, customer service, and IT, need to be in sync to drive a meaningful digital marketing strategy. If the entire organization is not geared toward this transformation, digital marketing efforts will eventually devolve into traditional e-marketing, creating little business value.

Effective digital marketing should result in seamless, excellent customer engagement, and it requires an overhaul of multiple interconnected processes within an organization to avoid actually driving a disconnect with the customer. A plethora of digital technologies cannot improve a bad business process. Therefore, marketers have the difficult task of bringing the entire organization along: explaining why process changes are required, how to improve customer touch-points, and how to build a customer experience lifecycle.

But, are marketers capable of doing this? Do they have the needed support and mandate from senior executives? Do they have the required organizational standing and stature to drive these changes? Can they fathom and swim across the political landscape and inertia of their organization?

Most importantly, marketers must keep the customer top of mind when considering the use of digital technologies. The reality is that extreme technology leverage may confuse, frustrate, and overwhelm the customer, and the branding message may become convoluted, confusing, and irrelevant. Though marketers are becoming increasingly tech-savvy, they should never forget that their role is not to adopt the latest digital technology but to serve their customers.

Digital channels are means to an end, not the end by themselves. For marketers, it is easy to get carried away by believing new technology is “digital marketing.” But what they may not realize is that “digital” may actually be killing marketing.


Photo credit: Flickr

The Internet of Things and the March of the As-A-Service Economy | Sherpas in Blue Shirts

The irresistible force paradox asks, “What happens when an unstoppable force meets an immovable object?” I think it’s the opposite when it comes to the Internet of Things (IoT) and the already booming as-a-service economy: “What happens when an unstoppable force befriends an unstoppable object?”

Most of the discussion to date around the as-a-service economy has been focused on cloud services, SaaS, and the likes of Uber. At the heart of this economy are the fundamental premises that customers – either business or consumer – can “rent” rather than own the product or service, and can do so, on demand, when they need it, paying as they go.
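The rent-rather-than-own premise ultimately reduces to a break-even calculation: pay-as-you-go wins until usage crosses a threshold. A minimal sketch, with entirely made-up prices for illustration:

```python
# Hedged sketch of the rent-vs-own premise of the as-a-service
# economy. The costs below are illustrative, not market data.

def cheaper_option(uses, own_cost=1000.0, rent_per_use=12.0):
    """Return which model is cheaper for a given level of usage."""
    rent_total = uses * rent_per_use
    return "rent" if rent_total < own_cost else "own"

print(cheaper_option(50))   # rent: 50 * 12.0 = 600.0 < 1000.0
print(cheaper_option(100))  # own: 100 * 12.0 = 1200.0 >= 1000.0
```

Light or unpredictable users sit on the “rent” side of that line, which is exactly the segment the as-a-service economy serves.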

Although wishing for the utopian as-a-service model may be a futile exercise, the IoT can initiate meaningful models for heavy investment industries and quite a few consumer-focused businesses, and as technologists we should continue to push the envelope.

Let’s step back and think about how the IoT can push the sharing economy to its potential. Can product manufacturers leverage IoT principles, and create a viable technical and commercial model where idle assets are not priced, or are priced at a lower rate, thus saving customers millions of dollars? This would, of course, require collaboration between customers and product manufacturers to enable insight into how, when, and how much a customer consumes the product. But consider the possibilities!

One example is the car-for-hire market. Could a customer’s wearable device communicate with a reserved car, notifying it of the approximate wait time until it’s required, enabling the vehicle to be productively deployed somewhere else – in turn allowing the business to offer lower prices to the customer and reduce the driver’s idle time? I think the technology is there, and although the task is humongous and the returns uncertain, I am sure someone (ZipCar?) will experiment with this model at scale in the near future.
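The core dispatch decision in that scenario is simple to state: redeploy the reserved car only if a side trip fits inside the wait time the rider’s wearable reports. A hedged sketch, with hypothetical thresholds:

```python
# Illustrative dispatch rule for the car-for-hire example above.
# The safety buffer and the message shape are assumptions.

def can_redeploy(reported_wait_min, trip_estimate_min, buffer_min=5):
    """Redeploy only if the side trip plus a safety buffer fits
    inside the time the rider says they still need."""
    return trip_estimate_min + buffer_min <= reported_wait_min

print(can_redeploy(30, 20))  # True: 20 + 5 <= 30
print(can_redeploy(15, 20))  # False: the side trip would delay the rider
```

In practice the hard part is not this rule but the real-time plumbing around it: trustworthy wait-time signals from the wearable and accurate trip estimates.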

Another example is the thousands of small healthcare labs that cannot afford to own a blood analyzer. Innovative manufacturers of these machines could leverage IoT principles to analyze the blood-test patterns of individual labs and offer them a subscription model in which they are charged per blood test executed, or offered a bundled price of $X per 100 blood tests (much like HP’s Instant Ink offering).
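The billing logic behind such a model is straightforward to sketch: charge whole bundles where usage fills them, and bill the remainder per test unless another bundle would be cheaper. All prices and tier sizes below are made-up numbers for illustration, not the $X of the text:

```python
# Hedged sketch of the per-test / bundled pricing model described
# above. Prices and bundle size are illustrative assumptions.

def monthly_bill(tests_run, per_test=5.0, bundle_size=100, bundle_price=300.0):
    """Bill full bundles, then the cheaper of per-test billing or
    one more bundle for the remainder."""
    bundles, remainder = divmod(tests_run, bundle_size)
    tail = min(remainder * per_test, bundle_price)
    return bundles * bundle_price + tail

print(monthly_bill(230))  # 2 bundles (600.0) + 30 tests (150.0) = 750.0
print(monthly_bill(99))   # capped at one bundle: 300.0, not 495.0
```

The IoT part is what makes this billable at all: the analyzer itself reports `tests_run`, so the manufacturer can meter usage without trusting manual counts.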

The IoT has the potential to truly bring the power of a sharing economy upon us. In the near term, businesses face challenges in developing a viable commercial and support model. However, they must overcome these for society at large to truly benefit from this once-in-a-lifetime opportunity. They must remember that most industry disruption these days comes from outside the industry: if they don’t cannibalize themselves, someone else will. Thus, as traditional competitive strategy levers fast lose relevance, the IoT most definitely should be an integral part of their strategy.


Photo credit: Flickr
