Simplify Me


So here’s the monitor in the “modern” and beautiful Fort Lauderdale International Airport.


Can you see the number of electrical plugs, wires, connections, input/output ports, etc. on this device?


Obviously, it is comical and a farce as we near the end of 2015. 


Think about the complexity in building this monitor…in connecting it…in keeping it operational.


Yes, we are moving more and more to cellular and wireless communications, to miniaturization, to simple and intuitive user interfaces, to paperless processing, to voice recognition, to natural language processing, and to artificial intelligence.


But we are not there yet.


And we need to continue to make major strides to simplify the complexity of today’s technology. 


– Every technology device should be fully useful and usable by every user on first contact. 


– Every device should learn upon interacting with us and get better and better with time. 


– Every device should have basic diagnostic and self-healing capability. 


Any instructions that are necessary should be provided by the device itself–such as the device telling you step by step what to do to accomplish the task at hand–no manual, no Google instructions, no Siri questions…just you and the device interacting as one. 


User friendly isn’t enough anymore…it should be completely user-centric, period. 


Someday…in 2016 or beyond, we will get there, please G-d. 😉


(Source Photo: Andy Blumenthal)

Ex Machina Will Even Turn The Terminator

So this was a really cool display at the movie theater yesterday…



They had this head of the Terminator in an enclosed case, roped off.



Shiny metal alloy skull, bulging bright evil red eyes, and really grotesque yellowed teeth.



This certainly gets the attention of passersby for the upcoming movie, Terminator Genisys (coming out July 1).



Anyway, the Terminator is the ugly dude, especially when compared with Ava, the robot/artificial intelligence in Ex Machina, which we saw yesterday.



The Turing test is nothing for Ava!



She can not only fool them as to her humanity, but also outmaneuver them with her wit, sexuality, and a good dose of deceit and manipulation.



Frankly, I think AI Ava could even turn the terrible Terminator to her side of things–my bet is that’s the movie to come in 2017.



(Source Photo: Andy Blumenthal)

Talk To The Hand

So you know the saying “Talk to the hand, because the face ain’t home…”?



Well, IPsoft has an artificial intelligence agent called Amelia that handles service requests.



Instead of talking to a human customer service rep, you get to talk to a computer. 



The question is whether talking to Amelia is like talking to a hand, or is someone really home when AI is used to adroitly address your service issues?



Now apparently, according to the Wall Street Journal, this computer is pretty smart and can ingest every single manual and prior service request and learn how to answer a myriad of questions from people. 
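
The Journal piece doesn’t describe how Amelia actually works under the hood, but a minimal sketch of the general idea–ingest prior questions and answers, then match new queries against them–might look like this in Python (the knowledge base, the threshold, and the answer function are all hypothetical, invented for illustration):

```python
# Minimal retrieval-style Q&A: index prior questions, answer new ones by
# nearest-neighbor lookup. The "knowledge base" entries are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# (question, answer) pairs "ingested" from manuals and past service requests
knowledge_base = [
    ("How do I reset my password?",
     "Use the 'Forgot password' link on the login page."),
    ("Why is my invoice total wrong?",
     "Check whether last month's credit was applied."),
    ("How do I connect to the VPN?",
     "Install the client and sign in with your corporate ID."),
]

vectorizer = TfidfVectorizer()
question_matrix = vectorizer.fit_transform([q for q, _ in knowledge_base])

def answer(query, threshold=0.2):
    """Return the stored answer whose question best matches the query,
    or escalate when nothing matches well enough."""
    scores = cosine_similarity(vectorizer.transform([query]),
                               question_matrix)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        return "Escalating you to a human representative."
    return knowledge_base[best][1]

print(answer("I forgot my password, how can I reset it?"))
```

The threshold here stands in for the escalation decision–when no stored answer fits well, hand the call to a person.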



On one hand, maybe you’ll get better technical knowledge and more consistent responses by talking to a computerized service representative.



But on the other hand, if the interactive voice response systems–with their dead-end menus of call options, the endless maze of “If you want to reach X, press Y now,” and all the disconnects after you’ve already been on the line for 10 minutes–are any indication of what this will be like, I am leery, to say the least.



The Telegraph does say that Amelia can service customers in 20 languages and, after 2 months, can resolve 64% of “the most common queries” independently, so this is hopeful and maybe even an inspiring sign of what is to come.



These days, based on how much time we spend online in the virtual world, I think most people would actually prefer to talk to a knowledgeable computer than to a smart-aleck human who doesn’t want to be handling annoying customer calls all day anyway.



The key to whether Amelia and her computerized brothers and sisters of the future will be successful is not only how quickly they can find the correct answer to a problem, but also how well they can understand and address new issues that haven’t come up the same way before–and how they handle the emotions of the customer on the line, who wishes they didn’t have the problem prompting the call to begin with. 😉



(Source Photo: here with attribution to Vernon Chen)

Dexterous Drones


Ok, after the da Vinci system that uses robotics to conduct surgeries, this may not seem like such a feat, but think again.



While da Vinci is fully controlled by the surgeon, this drone from Drexel University–which can turn valves, doorknobs, and other controls–is on the road to doing so autonomously.



Think of robots that can manipulate the environment around them not on a stationary assembly line or doing repetitive tasks, but actually interacting in real time to open/close, turn things on/off, adjust control settings, pick things up/move them, and eventually even sit at a computer or with other people–like you or me–and interface with them.



Drones and robots will be doing a lot more than surveillance and assembly-line work–with artificial intelligence and machine learning, they will be doing what we do–or close enough. 😉

Web 1-2-3


Real cloud computing is not where we are today.

Utilizing infrastructure and apps on demand is only the beginning.

What IBM has emerging, above the other cloud providers, is the real deal: Watson, a cognitive computing system.

In 2011, Watson beat the human champions of Jeopardy; today, according to CNBC, it is being put online with twice the power.

Using computational linguistics and machine learning, Watson is becoming a virtual encyclopedia of human knowledge, and that knowledge base is growing by the day.

Moreover, that knowledge can be leveraged by cloud systems such as Watson to link troves of information together, process it to find hidden meanings and insights, make diagnoses, provide recommendations, and generally interact with humans.

Watson can read all the medical research, the up-to-date breakthroughs in science, or all the financial reports, and so on, and process them to come up with information intelligence.

In terms of cognitive computing, think of Apple’s Siri–but Watson doesn’t just tell you where the local pizza parlors are; it can tell you how to make a better pizza.

In short, we are entering the 3rd generation of the Internet:

Web 1.0 was the read-only, Web-based Information Source. This includes all sorts of online information available anytime and anywhere–typically, organizational webmasters publishing online content to the masses.

Web 2.0 is the read-write, Participatory Web. This is all forms of social computing and very basic information analytics. Examples include: email, messaging, texting, blogs, Twitter, wikis, crowdsourcing, online reviews, memes, and infographics.

Web 3.0 will be the think-talk, Cognitive Web. This incorporates artificial intelligence and natural language processing and interaction. Examples: Watson, or a good-natured HAL 9000.

In short, it’s one thing to move data and processing to the cloud, but when we get to genuine artificial intelligence and natural interaction, we are at a whole new computing level.

Soon we can usher in Kurzweil’s Singularity with Watson leading the technology parade. 😉

(Source Photo: Andy Blumenthal)

What If They Can Read Our Redactions?


The New Yorker has a fascinating article about technology advances being made to un-redact classified text from government documents.

Typically, classified material is redacted from disclosed documents with black bars that are technologically “burnt” into the document.

With the black bars, you are not supposed to be able to see/read what is behind them, because of the sensitivity of the material.

But what if our adversaries have the technology to un-redact or un-burn and autocomplete the words behind those black lines and see what it actually says underneath?

Our secrets would be exposed! Our sensitive assets put in jeopardy!

Already a Columbia University professor is working on a Declassification Engine that uses machine learning and natural language processing to determine semantic patterns that could give the ability “to predict content of redacted text” based on the words and context around them.

In this case, declassified information in the document is used in aggregate to “piece together” or uncover the material that is blacked out.

In an earlier case, in 2004, a doctoral candidate at Dublin City University used “document-analysis technologies” to decrypt critical information related to 9/11.

This was done by using syntax and structure to estimate the size of the blacked-out word, then using automation to run through dictionary words to see which would fit, along with another “dictionary-reading program” to filter the result set down to the likely missing word(s).
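
Neither project’s code is described in detail, but the mechanics in that last paragraph–estimate the hidden word’s length from the width of the bar, run a dictionary against that estimate, then filter the survivors–can be sketched in a few lines of Python (every number, word, and function name below is invented for illustration):

```python
# Toy sketch: estimate how many characters fit under a redaction bar, then
# keep dictionary words of roughly that size that also pass a second filter.
# All measurements and words here are made up.

AVG_CHAR_WIDTH_PT = 6.0  # assumed average glyph width for the document's font

def estimate_length(bar_width_pt):
    """Guess the character count hidden under a redaction bar."""
    return round(bar_width_pt / AVG_CHAR_WIDTH_PT)

def candidates(bar_width_pt, dictionary, context_filter=None):
    n = estimate_length(bar_width_pt)
    words = [w for w in dictionary if abs(len(w) - n) <= 1]  # +/- 1 char slack
    if context_filter:  # stand-in for the second "dictionary-reading program"
        words = [w for w in words if context_filter(w)]
    return words

dictionary = ["egypt", "syria", "iran", "pakistan", "russia", "afghanistan"]
# a bar ~36 points wide, in a sentence where only names ending in "a" fit
print(candidates(36.0, dictionary, context_filter=lambda w: w.endswith("a")))
# -> ['syria', 'russia']
```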

The point here is that, with the right technology, redacted text can be un-redacted.

Will our adversaries (or even allies) soon be able to do this, or perhaps, someone out there has already cracked this nut and our secrets are revealed?

(Source Photo: here with attribution to Newspaper Club)

Can a Computer Run the Economy?


I am not talking about socialism or totalitarianism, but about computers and artificial intelligence.

For a long time, we have seen political infighting and finger-pointing stall progress on creating jobs, balancing trade, taming the deficits, and sparking innovation.

But what if we somehow took out the quest for power and influence from navigating our prosperity?

In politics, unfortunately, no one seems to want to give the other side the upper hand–a political win with voters or a leg up with their platform.

But through the disciplines of economics, finance, organizational behavior, industrial psychology, sociology, geopolitics, and more–can we program a computer to steer the economy using facts rather than fighting and fear?

Every day, we need to make decisions, big and small, on everything from interest rates, tax rates, borrowing, defense spending, entitlements, pricing strategies, regulating critical industries, trade pacts, and more.

Left in the hands of politicians, we inject personal biases and even hatreds, power plays, grandstanding, bickering, and “pork-barrel” decision-making, rather than rational action based on analysis of alternatives, cost-benefits, risk management, and underlying ethics.

We thumb our noses (rightfully) at global actors on the political stage, saying who is rational and who is perhaps just plain crazy enough to hit “the button.”

But back here at home, we can argue about whether the button of economic destruction has already been hit, with the clock ticking down as the national deficit spirals upward, education scores plummet, and jobs are lost overseas.

Bloomberg Businessweek (30 August 2012) suggests using gaming as a way to get past the political infighting, instead having small (diverse) groups make unambiguous trade-off decisions to guide the economy rather than “get reelected”–pleasantly, the results were cooperation and collaboration.

Yes, a game is just a game, but there is a lesson we can learn from this–economic decision-making can be made (more) rationally by rewarding teamwork and compromise, rather than by all-or-nothing, fall-on-your-sword, party-against-party, take-no-prisoners politics.

I would suggest that gaming is a good example of how we can improve our economy, but I can see a time coming where “big data,” analytics, artificial intelligence, modeling and simulation, and high-performance computing take this a whole lot further–where computers, guided and inspired by people, help us make rational economic choices, thereby trumping decisions made by gut, intuition, politics, and subjective whims.

True, computers are programmed by human beings–so won’t we just introduce our biases and conflicts into the systems we develop and deploy?

The idea here is to filter out those biases using diverse teams of rational decision-makers working together, applying subject-matter expertise and best practices, and then having the computers learn over time in order to improve performance–all of this separate from the desire and process to get votes and get elected.

Running the economy should not be about catering to constituencies or getting and keeping power for power’s sake, but rather about rational decision-making for society–where the greatest good is provided to the greatest number, where the future takes center stage, where individuals’ preferences and rights are respected and upheld, and where ethics and morality underpin every decision we make.

The final question is whether we will be ready to course-correct, with collaboration and advances in technology, to get out of this economic mess before it gets even more serious.

(Source Photo: here with attribution to Erik Charlton)

Terascale Computing and Enterprise Architecture

In MIT Technology Review, 26 September 2007, in an article entitled “The Future of Computing, According to Intel” by Kate Green, the author describes terascale computing–computational power beyond a teraflop (a trillion calculations per second).

“One very important benefit is to create the computing ability that’s going to power unbelievable applications, both in terms of visual representations, such as this idea of traditional virtual reality, and also in terms of inference. The ability for devices to understand the world around them and what their human owners care about.”

How do computers learn inference?

“In order to figure out what you’re doing, the computing system needs to be reading data from sensor feeds, doing analysis, and computing all the time. This takes multiple processors running complex algorithms simultaneously. The machine-learning algorithms being used for inference are based on rich statistical analysis of how different sensor readings are correlated.”

What’s an example of how inference can be used in today’s consumer technologies?

For example, sensors in your phone could determine whether you should be interrupted for a phone call. “The intelligent system could be using sensors, analyzing speech, finding your mood, and determining your physical environment. Then it could decide [whether you need to take a call].”
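
Green’s article doesn’t give an algorithm, but as a toy illustration of inference over correlated sensor readings, here is a sketch that trains a tiny classifier to decide whether to ring a call through (the features, data, and labels are all made up):

```python
# Toy inference sketch: learn from (made-up) correlated sensor readings
# whether to let a call ring through or hold it.
from sklearn.tree import DecisionTreeClassifier

# features: [ambient_noise_db, is_moving, calendar_busy, caller_is_vip]
X = [
    [30, 0, 1, 0],  # quiet, still, in a meeting, ordinary caller
    [30, 0, 1, 1],  # same situation, but the boss is calling
    [65, 1, 0, 0],  # noisy and walking, free on the calendar
    [40, 0, 0, 0],  # quiet, still, free
]
y = [0, 1, 1, 1]    # 1 = ring through, 0 = hold the call

model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(model.predict([[35, 0, 1, 0]]))  # in a meeting, non-VIP -> [0], hold it
```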

What is machine learning?

As a broad subfield of artificial intelligence, machine learning is concerned with the design and development of algorithms and techniques that allow computers to “learn.” At a general level, there are two types of learning: inductive and deductive. Inductive machine learning methods extract rules and patterns out of massive data sets. The major focus of machine learning research is to extract information from data automatically, by computational and statistical methods. (Wikipedia)
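
To make “inductive” concrete, here is a toy sketch that induces a simple rule from labeled examples rather than being handed the rule up front (the data and the exclamation-mark feature are invented for illustration):

```python
# Toy inductive learning: induce a threshold rule ("flag it if there are
# more than t exclamation marks") from labeled examples. Data is made up.
examples = [("win cash now!!!!", 1), ("meeting at 3pm", 0),
            ("free prize!!", 1), ("lunch tomorrow?", 0)]

def count_bangs(text):
    return text.count("!")

# try every candidate threshold and keep the one that makes fewest mistakes
best_t, best_err = None, float("inf")
for t in range(6):
    err = sum((count_bangs(x) > t) != bool(label) for x, label in examples)
    if err < best_err:
        best_t, best_err = t, err

print(f"learned rule: flag if '!' count > {best_t} ({best_err} errors)")
```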

Where’s all this computational power taking us?

Seems like we’re moving ever closer to the reality of what was portrayed as HAL 9000, the supercomputer from 2001: A Space Odyssey–HAL was “the pinnacle in artificial machine intelligence, with a remarkable, error-free performance record…designed to communicate and interact like a human, and even mimic (or reproduce) human emotions.” (Wikipedia) An amazing vision for a 1968 science fiction film, no?

From a User-centric EA perspective, terascale computing, machine learning, and computer inference represent tremendous new technical capabilities for our organizations. They are a leap in computing power and end-user application that has the capability to significantly alter our organizations’ business activities and processes and enable better, faster, and cheaper mission execution.