Stack Theory Doesn’t Stack Up


Christopher Mims’ article in the Wall Street Journal today on why big companies get disrupted by others doesn’t make a lot of sense to me. 

He discusses venture capitalist Anshu Sharma’s “Stack Fallacy,” which “is the mistaken belief that it is trivial to build the layers above yours.”

Mims explains that the stack is like a “layer cake of technology”–where one layer is built on another.

This is similar to the OSI model, with architectural layers for physical, data link, network, application, and so on.

Basically, Mims explains that tech companies can only invent at a single layer of technology (or below). 

But when companies try to invent up the stack, they fail.

Here’s why…

Mims says that companies, despite their size and resources, can’t innovate up the stack because they don’t understand the users there.

But this doesn’t stack up to me. 

Companies can and do use their resources to study and understand what users want up the food chain and what they can’t easily build, they can acquire. 

Apple successfully went from an iPod music player and iTunes song store to producing a highly sophisticated and integrated iPhone and App Store, where music is just an afterthought.

Similarly, IBM went from being primarily a mainframe and desktop company to being a top-tier consulting firm with expertise in cloud, mobile, social, artificial intelligence, and analytics computing. 

But it isn’t easy for a company to change. 

And to me, it’s not because they can’t understand what users want and need. 

Rather, it is because of something we’ve all heard of called specialization. 

Like human beings, even extraordinary ones, companies are specialized and good at what they are good at, but they aren’t good at everything. 

A great example of this was when NBA superstar Michael Jordan tried to take his basketball talents and apply them to baseball…he was “bobbling easy flies and swatting at bad pitches” in the minor leagues.

Even kindergarteners are taught that “Everyone is good at something, but no one is good at everything.”

Companies have a specific culture, a specific niche, a specific specialization and expertise.

And to go beyond that is very, very difficult…as IBM learned, it requires nothing less than a transformation of epic proportions. 

So I think Mims is wrong that companies can’t understand what users want in areas up the innovation stack; rather, it’s a monumental change-management challenge for companies that are specialized in one thing and not another.

Welcome to the world of Apple after Steve Jobs and his iPhone, and to the recent 25% decline in their stock price, with investors and customers anxiously waiting for the possible but not certain next move up the technology stack. 😉

(Source Photo: Andy Blumenthal)

Web 1-2-3

Ushering In Web 3.0

The real cloud computing is not where we are today.

Utilizing infrastructure and apps on demand is only the beginning.

What IBM has emerging above the other cloud providers is the real deal: Watson, its cognitive computing system.

In 2011, Watson beat the human champions of Jeopardy; today, according to CNBC, it is being put online with twice the power.

Using computational linguistics and machine learning, Watson is becoming a virtual encyclopedia of human knowledge, and that knowledge base is growing by the day.

Moreover, that knowledge can be leveraged by cloud systems such as Watson to link troves of information together, process them to find hidden meanings and insights, make diagnoses, provide recommendations, and generally interact with humans.

Watson can read all medical research, up-to-date breakthroughs in science, or all financial reports and so on and process this to come up with information intelligence.

In terms of cognitive computing, think of Apple’s Siri; but Watson doesn’t just tell you where the local pizza parlors are, it can tell you how to make a better pizza.

In short, we are entering the 3rd generation of the Internet:

Web 1.0 was the read-only, Web-based Information Source. This includes all sorts of online information available anytime and anywhere. Typically, organizational webmasters published online content to the masses.

Web 2.0 is the read-write, Participatory Web. This is all forms of social computing and very basic information analytics. Examples include: email, messaging, texting, blogs, Twitter, wikis, crowdsourcing, online reviews, memes, and infographics.

Web 3.0 will be the think-talk, Cognitive Computing Web. This incorporates artificial intelligence and natural language processing and interaction. Examples: Watson, or a good-natured HAL 9000.

In short, it’s one thing to move data and processing to the cloud, but when we get to genuine artificial intelligence and natural interaction, we are at a whole new computing level.

Soon we can usher in Kurzweil’s Singularity with Watson leading the technology parade. 😉

(Source Photo: Andy Blumenthal)

Watson Can Swim



With IBM’s Watson beating the pants off Jennings and Rutter in Jeopardy, a lot of people want to know: can computers really think?

Both sides of this debate have shown up in the last few weeks in some fascinating editorials in the Wall Street Journal.

On one hand, on 23 February 2011, John Searle of the University of California, Berkeley wrote that “IBM invented an ingenious program–not a computer that can think.” According to Searle, Watson (or any computer for that matter) is not thinking but is simulating thinking.

In his most passionate expression, Searle exclaims: “Watson did not understand the questions, nor its answers, not that some of its answers were right and some wrong, not that it was playing a game, nor that it won–because it doesn’t understand anything.”

Today, on 14 March 2011, on the other hand, Stephen Baker, author of “Final Jeopardy–Man vs. Machine and the Quest to Know Everything,” took the opposing view and stated: “Watson is an early sighting of a highly disruptive force…one that can handle [information] jobs held by people.”

To the question of whether machine thinking is “real” thinking, Baker quotes David Ferrucci, IBM’s chief scientist, who when asked if Watson can think responded, “Can a submarine swim?”

The analogy is a very good one.

Just because a submarine doesn’t swim like a fish or a person, doesn’t mean it can’t swim. In fact and in a sense, for the very reason that it doesn’t swim exactly like a fish or person, it actually can swim better.

So too with computers: just because they don’t “think” like humans doesn’t mean they don’t think. They just think differently, and again, in a sense, maybe for that very reason, in certain ways they can think better.

How can a computer sometimes think better than a person? Well, here are just some possible examples (non-exhaustive):

– Computers can evaluate options purely based on facts (and not get “bogged down” in emotions, conflict, ego, and so forth like human beings).
– Computers can add processing power and storage at the push of a button, as in cloud computing (people have the gray matter between their ears that G-d gave them, period).
– Computers do not tire of a problem–they will literally mechanically keep attacking it until it is solved (like cracking a password).
– Computers can be upgraded over time with new hardware, software, and operating systems (unlike people who age and pass).
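The “keep attacking it until solved” bullet above can be sketched as a simple brute-force search. This is an illustrative toy, not real password cracking: the `brute_force` helper, the tiny alphabet, and the sample target are all made up for the example.

```python
import itertools
import string

def brute_force(target: str, alphabet: str = string.ascii_lowercase, max_len: int = 4):
    """Mechanically try every combination of letters, shortest first,
    until the target string is found -- tirelessly, as a machine does."""
    attempts = 0
    for length in range(1, max_len + 1):
        for candidate in itertools.product(alphabet, repeat=length):
            attempts += 1
            if "".join(candidate) == target:
                return "".join(candidate), attempts
    return None, attempts

# A human would tire long before 2,000+ tries; the loop does not.
found, tries = brute_force("cab")
```

The point is not efficiency (real attacks use smarter search) but persistence: the loop examines every candidate without fatigue, boredom, or ego.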

At the same time, it is important to note that people still trump computers in a number of facets:

– We can evaluate things based on our conscience and think in terms of good and evil, and faith in a higher power (a topic of a prior blog).
– We can care for one another–especially children and the needy–in an altruistic way that is not based on information or facts, but on love.
– We can work together like ants in a colony or bees in a hive or crowdsourcing on- or off-line to get large jobs done with diversity and empowerment.
– We are motivated to better ourselves and our world–to advance ourselves, families, and society through continuous improvement.

Perhaps, like the submarine and the fish, both of which can “swim” in their own ways, so too both computers and people can “think”–each in their own capacity. Together, computers and people can augment the other–being stronger and more effective in carrying out the great tasks and challenges that confront us and await.

Machine, Checkmate.


It’s the eternal battle of Man vs. Machine—our biggest fear and greatest hope—which is ultimately superior?

On one hand, we are afraid of being overtaken by the very technology we build, and simultaneously, we are hopeful at what ailments technology can cure and what it can help us achieve.

In spite of our hopes and fears, the overarching question is can we construct computers that will in fact surpass our own distinct human capabilities?

This week IBM’s Supercomputer Watson will face off against two of the all-time greatest players, Ken Jennings and Brad Rutter, in a game of Jeopardy—at stake is $1.5 million in prize money.

Will we see a repeat of technology defeating humankind as happened in 1997, when IBM’s supercomputer at the time, Deep Blue, beat world champion Garry Kasparov in chess?

While losing some games—whether chess or Jeopardy—is perhaps disheartening to people and their mental acuity; does it really take away from who we are as human beings and what makes us “special” and not mere machines?

For decades, a machine’s ability to act “more human” than a person has been testing the ever-thinning divide between man and machine.

An article in The Atlantic (March 2011) called Mind vs. Machine exposes the race to build computers that can think and communicate like people.

The goal is to use artificial intelligence in machines to rival real intelligence in humans and to fool a panel of judges at the annual meeting for the Loebner Prize and pass the Turing test.

Alan Turing, in his 1950 paper “Computing Machinery and Intelligence,” asked whether machines can think. He posited that if a judge could not tell machine from human in text-only communication (used to mask the audible differences between machines and humans), then the machine was said to win!

Turing predicted that by the year 2000, computers would be able to fool 30% of human judges after five minutes of conversation. While this has not happened, it came close (missing by only one deception) in 2008 with an AI program called Elbot.

Frankly, it is hard for me to really imagine computers that can talk with feelings and expressiveness—based on memories, tragedies, victories, hopes, and fears—the way people do.

Nevertheless, computer programs going back to the Eliza program in 1964 have proven very sophisticated and adept at passing for human, so much so that “The Journal of Nervous and Mental Disease” in 1966 said of Eliza that “several hundred patients an hour could be handled by a computer system designed for this purpose.” Imagine that a computer was proposed to function as a psychotherapist already 45 years ago!

I understand that Ray Kurzweil has put his money on IBM’s Watson for the Jeopardy match this week, and that certainly is in alignment with his vision of “The Singularity” where machines overtake humans in an exponentially accelerating advancement of technology toward “massive ultra-intelligence.”

Regardless of who wins Jeopardy this week—man or machine—and when computers finally achieve the breakthrough Turing test, I still see humans as distinct from machines, not in their intellectual or physical capabilities, but ultimately in the moral (or some would call it religious) conscience that we carry in each one of us. This is our ability to choose right from wrong—and sometimes to choose poorly.

I remember learning in Jewish Day School (“Yeshiva”) that humans are a combination—half “animal” and half “soul”. The animal part of us lusts after all that is pleasurable, at virtually any cost, but the soul part of us is the spark of the divine that enables us to choose to be more—to do what’s right, despite our animal cravings.

I don’t know of any computer, super or not, that can struggle between pleasure and pain and right and wrong, and seek to grow beyond its own mere mortality through conscious acts of selflessness and self-sacrifice.

Even though in our “daily grind,” people may tend to act as automatons, going through the day-to-day motions virtually by rote, it is important to rise above the machine aspect of our lives, take the “bigger picture” view and move our lives towards some goals and objectives that we can ultimately be proud of.

When we look back on our lives, it’s not how successful we became, how much money and material “things” we accumulated—these are the computerized aspects of our lives that we sport. Rather, it’s the good we do for others that will stay behind long after we are gone. So whether the computer has a bigger database, faster processor, and better analytics—good for it—in the end, it has nothing on us humans.

Man or machine—I say machine, checkmate!

We Need A Grand Vision—Let It Be Smart!

We can build systems that are stand-alone and require lots of hands-on monitoring, care, and feeding, or we can create systems that are smart: self-monitoring, providing ongoing feedback, often self-healing, and helping to ensure higher levels of productivity and up-time.

According to the Wall Street Journal, 17 February 2009, smart technology is about making systems that are “intelligent and improve productivity in the long run…they [make use of] the latest advances in sensors, wireless communications and computing power, all tied together by the Internet.”

As we pour hundreds of billions of dollars of recovery funds into fixing our aging national infrastructure for roads, bridges, and the energy grid—let’s NOT just fix the potholes and reinforce the concrete girders and have more of the same. RATHER, let’s use the opportunity to leap forward and build a “smarter,” more cost-effective, and modernized infrastructure that takes us, as a nation, to the next playing level in the global competitive marketplace.

Smart transportation—the “best way to fight congestion is intelligent transportation systems, such as roadside sensors to measure traffic and synchronize traffic lights to control the flow of vehicles…real time information about road conditions, traffic jams and other events.” Next up is predictive technology to tell where jams will happen before they actually occur and “roadways that control vehicles and make ‘driving’ unnecessary.”

Smart grid—this would provide for “advanced electronic meters that send a steady stream of information back to the utility” to determine power outages or damage and reroute power around trouble areas. It also provides for consumer portals that show energy consumption of major appliances, calculate energy bills under different usage scenarios and allow consumers to moderate usage patterns. Additionally, a smart grid would be able to load balance energy from different sources to compensate for peaks and valleys in usage of alternative energy sources like solar and wind.
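The consumer-portal idea above—calculating energy bills under different usage scenarios—can be sketched in a few lines. The rates, usage figures, and function name here are invented for illustration; real tariffs are far more complex.

```python
# Hypothetical rates for illustration only (not real utility tariffs).
PEAK_RATE = 0.20      # $/kWh during peak hours (assumed)
OFF_PEAK_RATE = 0.10  # $/kWh off-peak (assumed)

def monthly_bill(peak_kwh: float, off_peak_kwh: float) -> float:
    """Estimate a monthly bill from metered peak and off-peak consumption,
    the kind of what-if calculation a smart-meter portal could offer."""
    return round(peak_kwh * PEAK_RATE + off_peak_kwh * OFF_PEAK_RATE, 2)

# Scenario comparison: shifting 100 kWh of usage from peak to off-peak.
as_is = monthly_bill(300, 200)    # current usage pattern
shifted = monthly_bill(200, 300)  # after moderating peak usage
savings = round(as_is - shifted, 2)
```

Seeing the savings number is exactly the feedback loop the smart grid promises: the meter streams consumption data back, and the consumer moderates usage patterns in response.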

Smart bridges—this will provide “continuous electronic monitoring of bridge structures using a network of sensors at critical points.” And there are 600,000 bridges in the U.S. As with other smart technologies, it can help predict problems before they occur or are “apparent to a human inspector…this can make the difference between a major disaster, a costly retrofit or a minor retrofit.”
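The bridge-monitoring idea above boils down to flagging sensor readings before they become a disaster. Here is a minimal sketch of that logic; the sensor names, strain limit, and trend rule are all invented for illustration, not real engineering thresholds.

```python
# Illustrative only: the limit and readings are made-up numbers.
STRAIN_LIMIT = 850  # microstrain; assumed safe limit for this example

def check_bridge(readings: dict[str, list[int]]) -> list[str]:
    """Flag sensors that exceed the limit, or whose readings are rising
    fast, so a minor retrofit can be scheduled before a costly failure."""
    alerts = []
    for sensor, values in readings.items():
        if max(values) >= STRAIN_LIMIT:
            alerts.append(f"{sensor}: over limit")
        elif len(values) >= 2 and values[-1] > values[0] * 1.5:
            alerts.append(f"{sensor}: rising trend")
    return alerts

alerts = check_bridge({
    "girder-3": [400, 420, 900],  # exceeds the limit
    "girder-7": [300, 380, 470],  # rising fast, not yet over
    "deck-1":   [200, 210, 205],  # normal
})
```

The second rule is the “smart” part: it catches a deteriorating member that a point-in-time human inspection might still pass.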

Smart technology can be applied to just about everything we do. IBM for example, talks about Smart Planet and applying sensors to our networks to monitor computer and electronic systems across the spectrum of human activity.

Building this next level of intelligence into our systems is good for human safety, a green environment, productivity, and cost-efficiency.

In the absence of recovery spending on a grand vision such as a cure for cancer or colonization of Mars, at the VERY least, when it comes to our national infrastructure, let’s spend with a vision of creating something better—“Smarter”–for tomorrow than what we have today.

Small Is In and Enterprise Architecture


Remember the saying, “good things come in small packages”? In enterprise architecture, big is out and small is in. This applies not only to the obvious consumer electronics market, where PDAs, phones, chips, and everything electronic seems to be getting smaller and sleeker, but also to the broader computing market (such as the transition from mainframe to distributed computing) and even to the storage device market.

The Wall Street Journal, 10 January 2008, reports that Mr. Moshe Yanai was responsible for one of IBM’s defeats in the 1990s, “when he designed the computer storage disks for EMC Corp. that displaced IBM’s in the data centers around the globe.”

How did Mr. Yanai do this?

He did this by going small. “One point of the architecture is simplicity of management of data…with his architecture, you just add more pieces.”

In creating Symmetrix disk drives, Mr. Yanai developed storage drives that were “cheaper, faster, and more reliable than IBM drives…he pioneered a technology called RAID–short for redundant arrays of inexpensive disks–that linked dozens of the kinds of disk drives used in PCs together to cheaply provide the same storage capacity as refrigerator-sized drives from IBM. RAID technology has since become a standard throughout the storage industry.”
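The core RAID idea described above—linking many small disks so they behave like one big one—can be sketched as simple round-robin striping (in RAID terms, RAID 0, with no parity or redundancy). The function names and in-memory “disks” are invented for illustration; real arrays work at the block-device level with parity for fault tolerance.

```python
def stripe(data: bytes, n_disks: int, chunk: int = 4) -> list:
    """Split data into fixed-size chunks and distribute them round-robin
    across n_disks, so many cheap disks pool into one large volume."""
    disks = [[] for _ in range(n_disks)]
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % n_disks].append(data[i:i + chunk])
    return disks

def read_back(disks: list) -> bytes:
    """Reassemble the original data by reading chunks round-robin,
    one from each disk in turn."""
    out = b""
    for row in range(max(len(d) for d in disks)):
        for d in disks:
            if row < len(d):
                out += d[row]
    return out

original = b"REDUNDANT ARRAYS OF INEXPENSIVE DISKS"
disks = stripe(original, 3)
restored = read_back(disks)
```

Because reads and writes are spread over several spindles at once, throughput scales with the number of disks—which is how arrays of PC-class drives matched refrigerator-sized mainframe storage.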

The small disk drives of EMC beat out the big drives from IBM, just like the PCs (of Dell and HP) beat out the mid-range and mainframe computers of IBM.

Mr. Yanai, a one-time Israeli tank commander, is a User-centric enterprise architect. He recognized the needs of his users for smaller, cheaper, and faster devices, and he delivered on this. Moreover, Mr. Yanai put the customer first not only in terms of product design and development, but also in terms of customer service. “Mr. Yanai was known as an expert engineer who also could talk to customers and solve their problems. Mr. Yanai put telephones in each storage device and programmed them to ‘phone home’” when they sensed a part was in danger of failing.

While Mr. Yanai was removed from his top engineering role at EMC, his company, XIV Corp., has been bought out by IBM, which has “locked up” his services. IBM may be a little slow (due to its size—a lumbering giant), but they are not poor or stupid, and they can buy the competition. Anyone remember Lotus Corp.?

From a User-centric EA perspective, the small and agile often wins out over the large and stodgy. It is a lesson thousands of years old, like the biblical tale of David vs. Goliath, when little David defeats the monstrous Goliath. Small is nimble and big is cumbersome. This is the same thing the U.S. military has found out and is converting to smaller, more agile, and mobile forces. EA needs to do the same in focusing on smaller, faster, cheaper computing devices and on simpler, more streamlined processes. Small is truly bigger than big!