Dexterous Drones


OK, after the da Vinci System that uses robotics to conduct surgeries, this may not seem like such a feat, but think again.



While da Vinci is fully controlled by the surgeon, this drone from Drexel University–which can turn valves, doorknobs, and other controls–is on the road to doing so autonomously.



Think of robots that can manipulate the environment around them–not on a stationary assembly line or doing repetitive tasks, but actually interacting in real time to open and close things, turn them on and off, adjust control settings, pick things up and move them, and eventually even sit at a computer or with other people–like you or me–and interface with them.



Drones and robots will be doing a lot more than surveillance and assembly line work–with artificial intelligence and machine learning, they will be doing what we do–or close enough. 😉

Display It Everywhere


We are getting closer to the day when mobile computing will truly mean computer interaction anywhere–on any surface, or even on no surface at all.
In this video we see the OmniTouch, developed by Microsoft in conjunction with Carnegie Mellon University, display your computer interface on everyday objects–yourself, a table, wall, and so on.
This takes the Kinect gaming technology to a whole new level in that the OmniTouch doesn’t just detect and sense your motions and gestures, but meshes them with the way people typically interact with computers.
Using a wearable pico projector and a depth camera, the OmniTouch creates a human-computer interface with a full QWERTY keyboard and touch, pan, zoom, and scroll capabilities.
It’s amazing to see the person demonstrating the interaction with the computer practically in thin air–oh boy, Minority Report here we come. 😉
Of course, to become a viable consumer solution, the shoulder-mounted contraption has to get really small–no bigger than a quarter maybe–and able to be mounted, with processors and connectivity, unobtrusively in clothing, furniture, or right into the building construction of your home or office.
At that point–and it hurts to say it given how much I love my iPhone–computers will no longer be device-driven; rather, the application will take center stage.

And the ability to project and click anywhere, anytime helps us reach a new level of mobility and convenience that almost boggles the senses.
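For the technically curious, here is a minimal sketch of the core trick the depth camera enables–deciding when a finger is “touching” an ordinary surface. To be clear, this is not the actual OmniTouch code, just a toy illustration; the frame sizes, tolerance, and function names are all my own assumptions.

```python
# Toy sketch of depth-camera touch detection (NOT the real OmniTouch code).
# Assumes depth frames arrive as 2D NumPy arrays of distances in millimeters.
import numpy as np

TOUCH_TOLERANCE_MM = 8  # a finger counts as "touching" within ~8 mm of the surface

def calibrate_surface(depth_frames):
    """Estimate the bare surface by averaging a few frames with no hands in view."""
    return np.mean(np.stack(depth_frames), axis=0)

def find_touch_points(depth_frame, surface_depth):
    """Return (row, col) pixels where something presses on the surface.

    Anything closer to the camera than the surface is a candidate finger;
    candidates within TOUCH_TOLERANCE_MM of the surface register as touches.
    """
    height_above_surface = surface_depth - depth_frame
    touching = (height_above_surface > 0) & (height_above_surface < TOUCH_TOLERANCE_MM)
    return np.argwhere(touching)

# Usage with synthetic data: a flat "table" one meter away, one fingertip on it.
surface = calibrate_surface([np.full((240, 320), 1000.0) for _ in range(5)])
frame = surface.copy()
frame[120, 160] = 995.0  # 5 mm above the surface -> registers as a touch
print(find_touch_points(frame, surface))  # -> [[120 160]]
```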

Human Evolution, Right Before Our Eyes

Watching how this toddler interacts with an iPad and is then frustrated by plain-old magazines is comical, but also a poignant commentary on our times.
Media that doesn’t move, drill down, pop up, connect us, and otherwise interact with the end-user is seen here as frustrating and dated.
This speaks volumes about where our children and grandchildren are headed with technology adoption and then hopefully “taking it to the next level” and the next!
At the same time, this obviously does not bode well for the legacy paper and magazine publishing industry.
It can be difficult to see things changing so dramatically before our very eyes, but with every door that closes, another one opens.
And so with technology and with life itself, “to everything there is a time and a purpose under the heaven.”

Computer, Read This


In 2002, Tom Cruise waved his arms in swooping fashion to control his Pre-Crime fighting computer in Minority Report, and this was just the tip of the iceberg when it comes to consumer interest in moving beyond traditional keyboards, trackpads, and mice to control our technology.

For example, there are the Nintendo Wii and Microsoft Kinect in the gaming arena, where we control the technology with our physical motions rather than hand-held devices. And consumers seem to really like having a controller-free gaming system. The Kinect sold so quickly–at the rate of roughly 133,000 per day during its first three months–that it earned the Guinness World Record for fastest-selling consumer device. (Mashable, 9 March 2011)

Interacting with technology in varied and natural ways–outside the box–is not limited to gestures; there are many other modalities, such as voice recognition, haptics, eye tracking, telepathy, and more. (A rough code sketch of the common pattern follows the list below.)

Gesture-driven–These are referred to as “spatial operating environments,” where cameras and sensors read our gestures and translate them into computer commands. Companies like Oblong Industries are developing a universal gesture-based language, so that we can communicate across computing platforms–“where you can walk up to any screen, anywhere in the world, gesture to it, and take control.” (Popular Science, August 2011)

Voice recognition–This is perhaps the most mature of the alternative technology control interfaces, and products like Dragon NaturallySpeaking have become not only standard on many desktops, but are also embedded in many smartphones, giving you the ability to do dictation, voice-to-text messaging, etc.

Haptics–This includes touchscreens with tactile sensations. For example, Tactus Technology is “developing keyboards and game controller knobs [that actually] grow out of touchscreens as needed and then fade away,” and another company, Senseg, is making technology that lets users feel vibrations, clicks, and textures for enhanced touchscreen control of their computers. (BusinessWeek, 20-26 June 2011)

Eye-tracking–For example, new Lenovo computers are using eye-tracking software by Tobii to control the browser and desktop applications, including email and documents. (CNET, 1 March 2011)

Telepathy–Tiny chips implanted in the brain, “the telepathy chip,” are being used to sense electrical activity in the nerve cells and thereby “control a cursor on a computer screen, operate electronic gadgets [e.g. television, light switch, etc.], or steer an electronic wheelchair.” (UK DailyMail, 3 Sept. 2009)
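Under the hood, all of these interfaces share one pattern: a recognizer (camera, microphone, eye tracker, brain implant) emits an event, and software translates it into a computer command. Here is a minimal, purely hypothetical sketch of that dispatch step–the channel names, events, and commands are made up for illustration.

```python
# Hypothetical sketch: many input channels, one command dispatcher.
# None of these event or command names come from a real product.

COMMAND_MAP = {
    ("gesture", "swipe_left"): "previous_page",
    ("gesture", "pinch"):      "zoom_out",
    ("voice",   "open email"): "launch_mail_client",
    ("gaze",    "dwell_icon"): "click",
}

def dispatch(channel, event):
    """Translate a recognized input event into a command, regardless of
    whether it arrived by gesture, voice, gaze, or something else."""
    command = COMMAND_MAP.get((channel, event))
    if command is None:
        return f"unrecognized {channel} event: {event}"
    return f"executing: {command}"

print(dispatch("gesture", "swipe_left"))  # -> executing: previous_page
print(dispatch("voice", "open email"))    # -> executing: launch_mail_client
```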

Clearly, consumers are not content to type away at keyboards and roll their mice…they want to interact with technology the way they do with other people.

It still seems a ways off for computers to understand us the way we really are and to communicate accordingly. For example, can a computer read non-verbal cues, which communication experts say make up something like 70% of our communication? Obviously, this hasn’t happened yet. But when the computer can read what I am really trying to say in all the ways that I am saying it, we will definitely have a much more interesting conversation going on.

(Source Photo: here)

Holograms – Projecting Soon


I think holograms are the next big thing.

This example of a hologram on an iPhone App is pretty amazing as an early version of what is to come.

Just wait for hologram phone calls and meetings, and integration with all things social media.

I see things like avatars–graphical representations of users–as a precursor to actual 3-D projected images of ourselves that will be sitting in the classroom, going to the office, and even interacting socially, like going on dates with our favorite other.

This is going to make things like Skype, Facetime, and Telepresence just baby steps in our ability to project ourselves across space and time and “be there” in ever more realistic ways participating and interacting with others.

As part of a training class a number of years ago, I had the opportunity to see a spatial hologram that was very cool. So holograms are not limited to people; entire environments can be virtualized, which gives us the opportunity to test new ways of behaving and to model and simulate new worlds.

This iPhone App is just a teaser of what is coming.

Man to Machine–How Far Will It Go?


This is an amazing video of the new FemBots from Japan. These robots are incredibly lifelike for nascent androids. With or without the background music, the video evokes an eerie feeling.

The vision of I, Robot and elements of Star Trek (remember the character “Data”?) are becoming a reality before our very eyes.
This is a convergence of humanity and technology, as scary as that sounds. (No, not our hearts and souls, but definitely recognizable physical dimensions.)

No longer are we talking about simple human-computer interfaces, computer ergonomics, or user-centric architecture design; rather, we are now moving toward technology with an emerging human semblance and characteristics, even some notional speech and affect.

I came across this video the same day that I saw, on Fox News, a breakthrough in robotic limbs for people. A man had actually been fitted with, and was using, a robotic hand that responded to his muscle movements. Obviously, this offers huge possibilities for people with disabilities.

Man to machine and machine to man. How far will it go?

Conversational Computing and Enterprise Architecture


In an article entitled “Intelligent, Chatty Machines” (MIT Technology Review, 19 September 2007), Kate Greene describes advances in computers’ ability to understand and respond to conversation. No, really.

Conversational computing works by using a “set of algorithms that convert strings of words into concepts and formulate a wordy response.”

The software product that enables this is called SILVIA and it works like this: “during a conversation, words are turned into conceptual data…SILVIA takes these concepts and mixes them with other conceptual data that’s stored in short-term memory (information from the current discussion) or long-term memory (information that has been established through prior training sessions). Then SILVIA transforms the resulting concepts back into human language. Sometimes the software might trigger programs to run on a computer or perform another task required to interact with the outside world. For example, it could save a file, query a search engine, or send an e-mail.”
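To make that description concrete, here is a toy sketch of the words-to-concepts-to-words loop in Python. To be clear, this is not the actual SILVIA software–the “concept extraction” here is just keyword matching, and all of the names and data are hypothetical.

```python
# Toy sketch of a conversational pipeline (NOT the real SILVIA):
# words in -> concepts -> memory lookup -> possible action -> words out.

LONG_TERM_MEMORY = {          # facts "trained" ahead of time (hypothetical)
    "weather": "I last heard it was sunny.",
    "email":   "I can send email for you.",
}

ACTIONS = {                   # concepts that trigger an outside-world task
    "email": lambda: "(sending email...)",
}

def words_to_concepts(utterance):
    """Turn a string of words into concept tokens (here: known keywords)."""
    return [w for w in utterance.lower().split() if w in LONG_TERM_MEMORY]

def respond(utterance, short_term_memory):
    """Mix concepts from the utterance with memory and form a wordy reply."""
    replies = []
    for concept in words_to_concepts(utterance):
        short_term_memory.append(concept)          # remember this discussion
        replies.append(LONG_TERM_MEMORY[concept])  # recall trained knowledge
        if concept in ACTIONS:
            replies.append(ACTIONS[concept]())     # interact with the world
    return " ".join(replies) or "Tell me more."

memory = []
print(respond("What about the weather today?", memory))  # -> I last heard it was sunny.
print(respond("Please check my email", memory))          # -> I can send email for you. (sending email...)
```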

There has been much research done over the years in natural-language processing technology, but the results so far have not fully met expectations. Still, the time will come when we will be talking with our computers, just like on Star Trek–though I don’t know if we’ll quite be saying “Beam me up, Scotty” just yet.

From an enterprise architecture standpoint, the vision of conversational artificial intelligence is absolutely incredible. Imagine the potential! This would change the way we do everyday mission and business tasks. Everything would be affected, from how we execute and support business functions and processes to how we use, access, and share information. Just say the word and it’s done! Won’t that be sweet?

I find it marvelous to imagine the day when we can fully engage with our technology on a more human level, such as through conversation. Then we can say goodbye to the keyboard and mouse, the way we did to the typewriter–museum pieces all.