Amazon Delivery – By Crunk-Car, If You Like

Jeff Bezos of Amazon is one very smart guy, and when he announces that he is interested in drones delivering your next online order, it makes for a lot of grandstanding.

But really, how exciting is a dumb drone delivering an order of diapers or a book?

Aside from putting a lot of delivery people at USPS, UPS, and FedEx out of work, what does the consumer get out of it?

Honestly, I don't care if the delivery comes by Zike-Bike, Crunk-Car, Zumble-Zay, Bumble-Boat, or a Gazoom, as Dr. Seuss would say–I just care that it gets here fast, safely, and cheaply.

Will a drone be able to accomplish those things? Likely–so great, send the drone over with my next order. But this doesn't represent the next big technological leap.

It doesn't give us what the future of robotics really offers: artificial intelligence, natural language processing, augmentation of humans, or outright substitution by robots–doing things stronger, faster, and more precisely, and perhaps even providing companionship to people.

Turning surveillance and attack drones into delivery agents is perhaps a nice gesture–remaking a weapon into an everyday service provider.

And maybe the Octocopters will even help get products to customers within that holy grail of a one-day timeframe that all the retailers are scrambling for.

It’s certainly a great marketing tool–because it’s got our attention and we’re talking about it.

But I'll take a humanoid robot sporting a metallic smile that can actually interact with people, solve problems, and perform a multitude of useful everyday functions–whether a caregiver, a bodyguard, or even a virtual friend (e.g. Data from Star Trek)–over a moving thingamajig that Dr. Seuss foresaw for Marvin K. Mooney. 😉

Web 1-2-3

Real cloud computing is not where we are today.

Utilizing infrastructure and apps on demand is only the beginning.

What IBM has emerging–above the other cloud providers–is the real deal: Watson, its cognitive computing system.

In 2011, Watson beat the human champions of Jeopardy; today, according to CNBC, it is being put online with twice the power.

Using computational linguistics and machine learning, Watson is becoming a virtual encyclopedia of human knowledge, and that knowledge base is growing by the day.

Moreover, that knowledge can be leveraged by cloud systems such as Watson to link troves of information together, process it to find hidden meanings and insights, make diagnoses, provide recommendations, and generally interact with humans.

Watson can read all the medical research, the up-to-date breakthroughs in science, or all the financial reports, and so on, and process them to come up with information intelligence.
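
To make that concrete, here is a deliberately tiny sketch of the retrieve-and-rank step at the heart of answering questions over a trove of documents–Watson's real pipeline is vastly richer, and the "papers" and overlap scoring below are invented purely for illustration:

```python
# Toy retrieve-and-rank over a document trove. Watson's actual pipeline
# (deep parsing, hypothesis generation, evidence scoring) is far richer;
# these "papers" and this overlap score are invented for illustration.
from collections import Counter

papers = {
    "oncology-2013": "trial shows new drug slows tumor growth in patients",
    "cardio-2012": "statins reduce cardiac events in elderly patients",
    "finance-q3": "quarterly report shows revenue growth across regions",
}

def score(question, document):
    """Rank a document by how many question terms it shares."""
    q_terms = Counter(question.lower().split())
    d_terms = Counter(document.lower().split())
    return sum((q_terms & d_terms).values())

def best_sources(question, k=2):
    """Return the k most relevant documents for a question."""
    ranked = sorted(papers, key=lambda p: score(question, papers[p]),
                    reverse=True)
    return ranked[:k]

print(best_sources("which drug slows tumor growth"))
# ['oncology-2013', 'finance-q3'] -- the passages a deeper pipeline would then read
```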

In terms of cognitive computing, think of Apple's Siri–but Watson doesn't just tell you where the local pizza parlors are, it can tell you how to make a better pizza.

In short, we are entering the 3rd generation of the Internet:

Web 1.0 was the read-only, Web-based Information Source. This included all sorts of online information available anytime and anywhere–typically, organizational Webmasters publishing content to the masses.

Web 2.0 is the read-write, Participatory Web. This is all forms of social computing and very basic information analytics. Examples include email, messaging, texting, blogs, Twitter, wikis, crowdsourcing, online reviews, memes, and infographics.

Web 3.0 will be the think-talk, Cognitive Computing Web. This incorporates artificial intelligence and natural language processing and interaction. Examples: Watson, or a good-natured HAL 9000.

In short, it's one thing to move data and processing to the cloud, but when we get to genuine artificial intelligence and natural interaction, we are at a whole new computing level.

Soon we can usher in Kurzweil's Singularity with Watson leading the technology parade. 😉

(Source Photo: Andy Blumenthal)

What If They Can Read Our Redactions?

The New Yorker has a fascinating article about technology advances being made to un-redact classified text from government documents.

Typically, classified material is redacted from disclosed documents with black bars that are technologically “burnt” into the document.

With the black bars, you are not supposed to be able to see or read what is behind them, because of the sensitivity of the material.

But what if our adversaries have the technology to un-redact or un-burn those black lines–to autocomplete the words behind them and see what the document actually says underneath?

Our secrets would be exposed! Our sensitive assets put in jeopardy!

Already, a Columbia University professor is working on a Declassification Engine that uses machine learning and natural language processing to find semantic patterns that could give the ability "to predict content of redacted text" based on the words and context around it.

In this case, declassified information in the document is used in aggregate to "piece together" or uncover the material that is blacked out.
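
As a rough illustration of that idea–this is my own toy sketch, not the professor's actual system, and the corpus and candidates are made up–you can rank candidate fills for a redacted word by how often each one co-occurs with the visible words around it in an open corpus:

```python
# Toy sketch: rank candidate fills for a redacted token by how often each
# candidate co-occurs with the surrounding visible words in an open corpus
# of declassified text. (Corpus and candidates invented for illustration.)
from collections import Counter

corpus = [
    "the agency intercepted communications from the embassy",
    "analysts reviewed intercepted communications for the report",
    "the embassy forwarded the report to the agency",
]

# Count how often each pair of words shares a sentence.
cooccur = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for w1 in words:
        for w2 in words:
            if w1 != w2:
                cooccur[(w1, w2)] += 1

def rank_candidates(context_words, candidates):
    """Score each candidate by its co-occurrence with the visible context."""
    scores = {c: sum(cooccur[(c, w)] for w in context_words)
              for c in candidates}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# "analysts reviewed [REDACTED] communications" -- which word fits best?
print(rank_candidates(["analysts", "reviewed", "communications"],
                      ["intercepted", "diplomatic", "routine"]))
```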

In an earlier case, in 2004, a doctoral candidate at Dublin City University used "document-analysis technologies" to decrypt critical information related to 9/11.

This was done by using syntax and structure, estimating the size of the blacked-out word, and then using automation to run through dictionary words to see which would fit–along with another "dictionary-reading program" to filter the result set down to the likely missing word(s).
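
A minimal sketch of that width-matching trick might look like the following–the character widths, wordlist, and bar measurement here are all invented, since the article only summarizes the actual method:

```python
# Toy sketch of the width-matching idea: in a proportional font each
# character has a known width, so the measured length of a black bar
# constrains which dictionary words could fit underneath it.
# (Widths, wordlist, and bar measurement are invented for illustration.)
CHAR_WIDTHS = {c: 5 for c in "abcdefghijklmnopqrstuvwxyz"}
CHAR_WIDTHS.update({"i": 2, "l": 2, "m": 8, "w": 8})  # hypothetical points

def word_width(word):
    """Estimate a word's rendered width from per-character widths."""
    return sum(CHAR_WIDTHS.get(c, 5) for c in word.lower())

def candidates_for_bar(bar_width, dictionary, tolerance=1):
    """Return dictionary words whose estimated width fits the black bar."""
    return [w for w in dictionary
            if abs(word_width(w) - bar_width) <= tolerance]

dictionary = ["Egyptian", "Saudi", "Iraqi", "Pakistani", "Yemeni"]
# Suppose measuring the redaction bar gives roughly 38 points:
print(candidates_for_bar(38, dictionary))
# ['Egyptian', 'Pakistani'] -- a second, grammar-aware pass would then
# pick between the survivors, as the dictionary-reading program did.
```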

The point here is that, with the right technology, redacted text can be un-redacted.

Will our adversaries (or even allies) soon be able to do this, or perhaps, someone out there has already cracked this nut and our secrets are revealed?

(Source Photo: here with attribution to Newspaper Club)

From Holocaust To Holograms

My father told me last week how my mom had awoken in the middle of the night full of fearful, vivid memories of the Holocaust.

In particular, she remembers when she was just a six-year-old little girl, walking down the street in Germany, and suddenly the Nazi S.S. came up behind them and dragged her father off to the concentration camp, Buchenwald–leaving her alone, afraid, and crying on the street. And so began their personal tale of oppression, survival, and escape.

Unfortunately, with an aging generation of Holocaust survivors–soon there won’t be anyone to tell the stories of persecution and genocide for others to learn from.

In light of this, as you can imagine, I was very pleased to see the University of Southern California (USC) Institute for Creative Technologies (ICT) and the USC Shoah Foundation collaborating on a project called “New Dimensions In Testimony” to use technology to maintain the enduring lessons of the Holocaust into the future.

The project involves developing holograms of Holocaust survivors giving testimony about what happened to them and their families during this awful period of discrimination, oppression, torture, and mass murder.

ICT is using a technology called Light Stage, which employs multiple high-fidelity cameras and lighting from more than 150 directions to capture 3-D holograms.

There are some interesting videos about Light Stage (which has been used for many familiar movies from Superman to Spiderman, Avatar, and The Curious Case of Benjamin Button) at their Stage 5 and Stage 6 facilities.

To make the holograms into a full exhibit, the survivors are interviewed and their testimony is combined with natural language processing, so people can come and learn in a conversational manner with the Holocaust survivor holograms.
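
I can only guess at the plumbing, but conceptually the natural language piece boils down to matching a visitor's question to the best pre-recorded answer–something like this toy sketch, where the clips and the crude matcher are invented and the real system is surely far more sophisticated:

```python
# Toy sketch: match a visitor's spoken question to the closest
# pre-recorded testimony clip. (Clips and the crude word-overlap
# matcher are invented; the real system is far more sophisticated.)
recorded_clips = {
    "clip_017": "how did you escape from germany",
    "clip_042": "what happened to your father",
    "clip_088": "what do you want young people to remember",
}

def overlap(a, b):
    """Count shared words between two questions (a stand-in for real NLP)."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def best_clip(visitor_question):
    """Pick the clip whose indexed question best matches the visitor's."""
    return max(recorded_clips,
               key=lambda c: overlap(visitor_question, recorded_clips[c]))

print(best_clip("what happened to your father during the war"))  # clip_042
```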

Mashable reports that these holograms may be used at the U.S. Holocaust Museum in Washington, D.C., where visitors will talk "face-to-face" with the survivors about their personal experiences–and we will be fortunate to hear it directly from them. 😉

(Photo from USC ICT New Dimensions In Testimony)

Challenging The Dunbar 150

Today, Facebook announced its new search tool, called Graph Search, for locating information on people, places, interests, photos, music, restaurants, and more.

Graph Search is still in beta, so you have to sign up in Facebook to get on the waiting list to use it.

But Facebook is throwing down the gauntlet to Google by using natural language queries–you search by just asking a question in plain language, like "my friends that like Rocky," and up come those smart ladies and gents.
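
Under the hood, a query like that presumably compiles down to a structured lookup over the social graph–here is a toy sketch of the idea, keeping in mind that Graph Search's real internals are not public and this graph and schema are invented:

```python
# Toy sketch of the structured lookup a phrase like "my friends that like
# Rocky" might compile down to. (Graph Search's real internals are not
# public; this graph and schema are invented for illustration.)
friends = {
    "me": ["alice", "bob", "carol"],
}
likes = {
    "alice": {"Rocky", "Jaws"},
    "bob": {"Casablanca"},
    "carol": {"Rocky"},
}

def friends_that_like(user, interest):
    """Intersect a user's friend list with the people who like something."""
    return [f for f in friends.get(user, [])
            if interest in likes.get(f, set())]

print(friends_that_like("me", "Rocky"))  # ['alice', 'carol']
```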

But Graph Search is not just a challenge to Google; it also takes on other social media tools and recommendation engines like Yelp and Foursquare, and even LinkedIn, which is now widely used for corporate recruiting.

Graph Search uses the Bing search engine, and its secret sauce, according to CNN, is that it culls information from over 1 billion Facebook accounts, 24 billion photos, and 1 trillion connections–so there is an enormous and growing database to pull from.

So while the average Facebook user has about 190 connections, some people have as many as 5,000, and like the now-antiquated business card file or Rolodex, all the people in your social network can provide important opportunities to learn and share. And while, by six degrees of separation, none of us is too far removed from everyone else anyway, we can still only Graph Search people and content in our own network.

Interestingly enough, while Facebook rolls out Graph Search to try to capitalize on its treasure trove of personal data and seemingly infinite connections, Bloomberg BusinessWeek (10 January 2013) ran an article called "The Dunbar Number" about how the human brain can only handle up to "150 meaningful relationships."

Whether in hunter-gatherer clans, military units, corporate divisions, or an individual's network of family, friends, and colleagues, our brain "has limits," and 150 is it when it comes to substantial real-world or virtual relationships–our brains have to process all the facets of social interaction, from working together against outside "predators" to guarding against "bullies and cheats" within the network.

According to Dunbar, digital technologies like the Internet and social media, while enabling people to grow their virtual Rolodex, do not really increase our social relationships in the real meaning of the word.

So with Graph Search, while you can mine your network for great talent, interesting places to visit, or restaurants to eat at, you are still fundamentally interacting with your core 150 when it comes to sharing the joys and challenges of everyday life. 😉

(Source Photo: Andy Blumenthal)

Robot Firefighters To The Rescue

Meet Octavia, a new firefighting robot from the Navy's Laboratory for Autonomous Systems Research (LASR) in Washington, D.C.

Octavia and her brother Lucas are the latest in firefighting technology.

These robots can hear commands, see through infrared cameras, identify patterns, and algorithmically make decisions on diverse information sets.

While the current prototypes move around like a Segway, future versions will be able to climb ladders and get around naval vessels.

It is pretty cool seeing this robot spray flame retardant to douse a fire, and you can imagine similar types of robots shooting guns on the front line at our enemies.

Robots are going to play an increasingly important role in all sorts of jobs, and not only the repetitive ones where we put automatons, but also the dangerous situations (like the bomb disposal robots), where robots can get out in front and safeguard human lives.

While the technology is still not quite there–and the robot seems to need quite a bit of instruction and hand-waving–you can still get a decent glimpse of what is to come.

Robots with artificial intelligence and natural language processing will be putting out those fires all by themselves…and then some.

A robot revolution is coming, and what we now call mobile computing is going to take on a whole new meaning, with robots on the go–autonomously capturing data, processing it, and acting on it.

I never did see an iPhone or iPad put out a fire, but Octavia and her brother Lucas will–and in the not-too-distant future!

Robots, Coming to An Agency Near You Soon

There is an article in today's Wall Street Journal (10-11 March 2012) about an Anybot robot that attended a wedding party in Paris dressed up as the man's 82-year-old mother, who logged on from her home in Las Vegas and, by proxy of the robot, moved and even danced around the party floor and conversed with guests–she was the hit of the party.

While sort of humorous, this is also amazingly incredible: through robotics, IT, and telecommunications, we are able to close the gap in time and space and "be there," even from half a world away.

The QB Anybot robot is life-size, rolls around on two wheels like a Segway, and has glowing blue eyes and a telescreen for a forehead atop a long, skinny, cylindrical body; it can be controlled remotely and costs only $9,700.

While this is the story of a robot "becoming the life of the party," I believe we are at the cusp of robots reporting for duty at our agencies and organizations.

The function of robots in the workplace has been tested, with them performing everything from menial office tasks (like bringing the coffee and donuts) to actually representing people at meetings and around the office floor–not only keeping an electric eye on things, so to speak, but actually Skyping back and forth with the boss, for example.

As robots become more dexterous and autonomous, with better artificial intelligence and the ability to communicate through natural language processing, we are going to see an explosion of them in the workplace–whether they end up looking like a Swiffer mop or something a little more iRobot-like.

So while we are caught up in deficit-busting times and calls for everything from "Cloud First" to "Share First" in order to consolidate, save, and shrink, maybe what we also need is a more balanced approach that takes into account not only efficiency, but also effectiveness through innovation in our workplaces–welcome to the party, Robots!

(Source Photo: Andy Blumenthal)

Conversational Computing and Enterprise Architecture

In MIT Technology Review, 19 September 2007, in an article entitled "Intelligent, Chatty Machines" by Kate Greene, the author describes advances in computers' ability to understand and respond to conversation. No, really.

Conversational computing works by using a "set of algorithms that convert strings of words into concepts and formulate a wordy response."

The software product that enables this is called SILVIA and it works like this: "during a conversation, words are turned into conceptual data… SILVIA takes these concepts and mixes them with other conceptual data that's stored in short-term memory (information from the current discussion) or long-term memory (information that has been established through prior training sessions). Then SILVIA transforms the resulting concepts back into human language. Sometimes the software might trigger programs to run on a computer or perform another task required to interact with the outside world. For example, it could save a file, query a search engine, or send an e-mail."
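
Reading that description, I imagine a loop roughly like the sketch below–to be clear, SILVIA's actual algorithms are proprietary, and this word-to-concept mapping is entirely invented:

```python
# Toy sketch of the described loop: words in, concepts mixed with short-
# and long-term memory, words (or triggered actions) out. SILVIA's actual
# algorithms are proprietary; this mapping is entirely invented.
concept_map = {        # long-term memory: trained word -> concept links
    "weather": "QUERY_WEATHER",
    "save": "ACTION_SAVE_FILE",
    "email": "ACTION_SEND_EMAIL",
}

responses = {
    "QUERY_WEATHER": "Let me look up the forecast for you.",
    "ACTION_SAVE_FILE": "Saving your document now.",
    "ACTION_SEND_EMAIL": "Composing an e-mail.",
}

short_term_memory = []  # concepts raised so far in this conversation

def respond(utterance):
    """Turn words into concepts, consult memory, and reply (or act)."""
    concepts = [concept_map[w] for w in utterance.lower().split()
                if w in concept_map]
    short_term_memory.extend(concepts)  # remember the current exchange
    if not concepts:
        return "Could you rephrase that?"
    if concepts[0].startswith("ACTION_"):
        pass  # a real system would trigger the task (save, e-mail, ...)
    return responses[concepts[0]]

print(respond("please save my file"))  # Saving your document now.
```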

There has been much research done over the years in natural-language processing technology, but the results so far have not fully met expectations. Still, the time will come when we will be talking with our computers, just like on Star Trek, although I don't know if we'll be saying "Beam me up, Scotty" quite yet.

From an enterprise architecture standpoint, the vision of conversational artificial intelligence is absolutely incredible. Imagine the potential! This would change the way we do everyday mission and business tasks. Everything would be affected, from how we execute and support business functions and processes to how we use, access, and share information. Just say the word and it's done! Won't that be sweet?

I find it marvelous to imagine the day when we can fully engage with our technology on a more human level, such as through conversation. Then we can say goodbye to the keyboard and mouse, the way we did to the typewriter–which is just a museum piece now.