Face Vase

Thought this was a pretty cool Face Vase. 


If you add a digital home assistant to this (like Amazon Echo or Google Assistant) and make the lips move on this vase, it would be quite the futuristic home assistant!


I don’t think I’d feel comfortable living in my own house anymore. 😉


(Source Photo: Andy Blumenthal)

Simplify Me


So here’s the monitor in the “modern” and beautiful Fort Lauderdale International Airport.


Can you see the number of electrical plugs, wires, connections, input/output ports, etc. on this device?


Obviously, it is comical and a farce as we near the end of 2015. 


Think about the complexity in building this monitor…in connecting it…in keeping it operational.


Yes, we are moving more and more to cellular and wireless communications, to miniaturization, to simple and intuitive user interfaces, to paperless processing, to voice recognition, to natural language processing, and to artificial intelligence.


But we are not there yet.


And we need to continue to make major strides to simplify the complexity of today’s technology. 


– Every technology device should be fully useful and usable by every user on first contact. 


– Every device should learn upon interacting with us and get better and better with time. 


– Every device should have basic diagnostic and self-healing capability. 


Any instructions that are necessary should be provided by the device itself–such as the device telling you step by step what to do to accomplish the task at hand–no manual, no Google instructions, no Siri questions…just you and the device interacting as one. 


User friendly isn’t enough anymore…it should be completely user-centric, period. 


Someday…in 2016 or beyond, we will get there, please G-d. 😉


(Source Photo: Andy Blumenthal)

Robot Firefighters To The Rescue

Meet Octavia, a new firefighting robot from the Navy’s Laboratory for Autonomous Systems Research (LASR) in Washington, D.C. Octavia and her brother Lucas are the latest in firefighting technology.

These robots can hear commands, see through infrared cameras, identify patterns, and algorithmically make decisions based on diverse information sets.

While the current prototypes move around like a Segway, future versions will be able to climb ladders and get around naval vessels.

It is pretty cool seeing this robot spray flame retardant to douse the fire, and you can imagine similar robots shooting guns on the front line at our enemies.

Robots are going to play an increasingly important role in all sorts of jobs, and not only the repetitive ones where we put automatons, but also the dangerous situations (like the bomb disposal robots), where robots can get out in front and safeguard human lives.

While the technology is still not there yet–and the robot seems to need quite a bit of instruction and hand waving–you can still get a decent glimpse of what is to come.

Robots with artificial intelligence and natural language processing will be putting out those fires all by themselves…and then some.

Imagine a robot revolution is coming, and what we now call mobile computing is going to take on a whole new meaning with robots on the go–autonomously capturing data, processing it, and acting on it.

I never did see an iPhone or iPad put out a fire, but Octavia and brother Lucas will–and in the not too distant future!

Computer, Read This


In 2002, Tom Cruise waved his arms in swooping fashion to control his Pre-Crime fighting computer in Minority Report, and this was just the tip of the iceberg when it comes to consumer interest in moving beyond the traditional keyboards, trackpads, and mice to control our technology. 

For example, there are the Nintendo Wii and Microsoft Kinect in the gaming arena, where we control the technology with our physical motions rather than hand-held devices. And consumers seem to really like having a controller-free gaming system. The Kinect sold so quickly–at the rate of roughly 133,000 per day during the first three months–that it earned the Guinness World Record for fastest-selling consumer device. (Mashable, 9 March 2011)

Interacting with technology in varied and natural ways–outside the box–is not limited to gestures; there are many others, such as voice recognition, haptics, eye movements, even telepathy, and more.

Gesture-driven–This is referred to as “spatial operating environments”–where cameras and sensors read our gestures and translate them into computer commands. Companies like Oblong Industries are developing a universal gesture-based language, so that we can communicate across computing platforms–“where you can walk up to any screen, anywhere in the world, gesture to it, and take control.” (Popular Science, August 2011)

Voice recognition–This is perhaps the most mature of the alternative technology control interfaces, and products like Dragon NaturallySpeaking have become not only standard on many desktops but are also embedded in many smartphones, giving you the ability to do dictation, voice-to-text messaging, etc.

Haptics–This includes touchscreens with tactile sensations. For example, Tactus Technology is “developing keyboards and game controller knobs [that actually] grow out of touchscreens as needed and then fade away,” and another company, Senseg, is making technology that produces sensations, so users can feel vibrations, clicks, and textures and can use these for enhanced touchscreen control of their computers. (BusinessWeek, 20-26 June 2011)

Eye-tracking–For example, new Lenovo computers are using eye-tracking software by Tobii to control the browser and desktop applications, including email and documents. (CNET, 1 March 2011)

Telepathy–Tiny chips implanted in the brain, “the telepathy chip,” are being used to sense electrical activity in nerve cells and thereby “control a cursor on a computer screen, operate electronic gadgets [e.g. television, light switch, etc.], or steer an electronic wheelchair.” (UK DailyMail, 3 Sept. 2009)

Clearly, consumers are not content to type away at keyboards and roll their mice…they want to interact with technology the way they do with other people.

It still seems a ways off for computers to understand us the way we really are and to communicate accordingly. For example, can a computer read non-verbal cues, which communication experts say make up something like 70% of our communication? Obviously, this hasn’t happened yet. But when the computer can read what I am really trying to say in all the ways that I am saying it, we will definitely have a much more interesting conversation going on.

(Source Photo: here)

The Printer’s Dilemma


There is a lot of interest these days in managed print solutions (MPS)—sharing printers and managing these centrally—for many reasons.

Some of the benefits are: higher printer utilization rates, reduced printing, cost savings, and various environmental benefits.

Government Computer News (5 April 2010) has an article called “Printing Money” that states: “managed printing is an obvious but overlooked way to cut costs, improve efficiency, and bolster security.”

But there are also a number of questions to consider:

What’s the business model? Why are “printing companies” telling us to buy fewer printers and to print less? Do car companies tell us to buy fewer cars and drive less (maybe drive more fuel-efficient vehicles, but drive less or buy fewer?), and do food companies advise us to buy less food or eat less (maybe eat healthier food, but less food)? To some vendors, the business model is simple: if we use their printers and cartridges—rather than a competitor’s—then even if we use less overall, the managed print vendor is getting more business, so for them the business model makes sense.

What’s the cost model? Analysts claim that agencies, by moving to managed print solutions, “could save at least 25 percent of their printing expenses,” and vendors claim hundreds of thousands, if not millions, in savings—and that is attractive. However, the cost of commodity printers, even multifunction ones with fax/copy/scan capability, has come way down, and so has the cost of print cartridges—although they are still overpriced—and we don’t change them all that often (I just changed one and can barely remember the last time I did). As an offset to the cost savings, do we need to consider the potential impact on productivity and effectiveness, as well as on morale—even if the latter is just the “annoyance factor”?
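The cost question lends itself to a quick back-of-the-envelope calculation. The sketch below is purely illustrative: every figure in it (printer prices, cartridge costs, page volumes, the assumed reduction in printing) is an assumption of mine for the sake of the exercise, not data from the article or any vendor.

```python
# Hypothetical comparison of personal printers vs. a managed print
# solution (MPS). All numbers below are illustrative assumptions.

def annual_cost(num_devices, device_price, device_life_years,
                cartridges_per_year, cartridge_price,
                pages_per_year, paper_cost_per_page=0.01):
    """Rough annual cost: amortized hardware + consumables + paper."""
    hardware = num_devices * device_price / device_life_years
    consumables = num_devices * cartridges_per_year * cartridge_price
    paper = pages_per_year * paper_cost_per_page
    return hardware + consumables + paper

# 100 staff, one cheap personal printer each, 2,000 pages/person/year
personal = annual_cost(num_devices=100, device_price=150, device_life_years=3,
                       cartridges_per_year=2, cartridge_price=60,
                       pages_per_year=100 * 2000)

# Same office served by 5 shared multifunction devices; assume central
# printing also cuts total page volume by 25%
managed = annual_cost(num_devices=5, device_price=3000, device_life_years=5,
                      cartridges_per_year=12, cartridge_price=120,
                      pages_per_year=100 * 2000 * 0.75)

print(f"Personal printers: ${personal:,.0f}/year")   # $19,000/year
print(f"Managed printing:  ${managed:,.0f}/year")    # $11,700/year
print(f"Savings: {100 * (1 - managed / personal):.0f}%")  # 38%
```

With these made-up numbers, the managed scenario comes out roughly 38 percent cheaper per year; the point is not the specific result but that the comparison hinges on amortized hardware, consumables, and page volume, each of which an organization should measure for itself.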

What’s the consumer market doing? The consumer market, in the opinion of many analysts and consumers, has jumped ahead of the office environment technologically, and most people have a printer sitting right next to them in their home office—don’t you? I’d venture to say that many people even have separate printers for other family members with their own computer setups, because cost- and convenience-wise, it just makes sense.

What’s the cultural/technological trend? Culturally and technologically, we are in the “information age,” most people in this country are “information workers,” and we are a fast-paced (and ever faster-paced) society where things like turnaround time and convenience (e.g., just-in-time inventory, overnight delivery, microwave dinners, etc.) are really important. Moreover, I ask myself: is Generation Y—texting and tweeting and Facebooking here, there, and everywhere—going to move toward giving up their printers, or in fact toward wanting to print from wherever they are (using the cloud or other services) and get to their documents and information immediately?

What’s the security impact? Granted, printing to central printers is secure, especially with access cards or PINs required to release your print jobs. But I ask whether, in an age where security and privacy of information (including corporate theft and identity theft) are huge issues, having a printer close by makes sense, especially when dealing with sensitive information like corporate strategy or “trade secrets,” mission security, personnel issues, or acquisition-sensitive matters, and so on. Additionally, can we still achieve the other security benefits of MPS—managing (securing, patching, etc.) and monitoring printers and print jobs—in a more decentralized model, through the same or similar network management functions that we use for our other end-user devices (computers, servers, storage, etc.)?
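The access-card/PIN mechanism mentioned above is what the industry calls “pull printing” or secure print release: jobs are spooled centrally, and nothing comes out of the device until the owner authenticates at it. Here is a minimal sketch of the idea; the class and PIN scheme are my own illustration, not any specific MPS product’s API.

```python
# Sketch of secure print release ("pull printing"): jobs are held
# centrally and only released when the user authenticates at the
# device with a card or PIN. Illustrative only.

from collections import defaultdict

class PrintReleaseQueue:
    def __init__(self):
        self._held = defaultdict(list)   # user -> list of held documents
        self._pins = {}                  # user -> PIN

    def register(self, user, pin):
        self._pins[user] = pin

    def submit(self, user, document):
        """Job is spooled, but nothing prints yet."""
        self._held[user].append(document)

    def release(self, user, pin):
        """At the printer: authenticate, then release all held jobs."""
        if self._pins.get(user) != pin:
            return []                    # wrong PIN: nothing comes out
        jobs, self._held[user] = self._held[user], []
        return jobs

q = PrintReleaseQueue()
q.register("alice", "4821")
q.submit("alice", "strategy-memo.pdf")
q.submit("alice", "offer-letter.pdf")

print(q.release("alice", "0000"))  # wrong PIN -> []
print(q.release("alice", "4821"))  # -> ['strategy-memo.pdf', 'offer-letter.pdf']
```

The design choice worth noting is that sensitive pages never sit in an output tray unattended; the trade-off is the walk to the shared device, which is exactly the convenience question the post raises.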

What’s the environmental impact? There are lots of statistics about the carbon footprint of printing—and most of it, I believe, comes from the paper, not the printers. So perhaps we can print smarter, not only by reducing printers but also with ongoing education and sensitivity to our environment and the needs of future generations. It goes without saying that we can and should cut down (significantly) on what and how much we print (and drive, and generally consume) in a resource-constrained environment—planet Earth.

In the end, there are a lot of considerations in moving to managed print solutions. There is certainly a valid and compelling case for MPS, especially in terms of the potential cost savings to the organization (particularly important in tough economic times, like now), but we should also weigh other considerations—productivity offsets, cultural and technological trends, and overall security and environmental impacts—and come up with what’s best for our organizations.

The Humanization of Computers


The Wall Street Journal recently reviewed (Sept. 10, 2010) “The Man Who Lied to His Laptop,” by Clifford Nass.

The book examines human-computer interactions in order to “teach us about human relationships.”

The reviewer, David Robinson, sums up with a question about computers (and relationships): “do we really think it’s just a machine?”

Answer: “A new field of research says no. The CASA paradigm—short for ‘computers as social actors’—takes as its starting point the observation that although we deny that we interact with a computer as we would with a human being, many of us actually do.”

The book review sums up human-computer interaction, as follows:

“Our brains can’t fundamentally distinguish between interacting with people and interacting with devices. We will ‘protect’ a computer’s feelings, feel flattered by a brown-nosing piece of software, and even do favors for technology that has been ‘nice’ to us. All without even realizing it.”

Some interesting examples of how we treat computers like people:

Having a heart for your computer: People in studies giving feedback on computer software have shown themselves to “be afraid to offend the machine” if they are using their own computers for the evaluation rather than a separate ‘evaluation computer.’

Sexualizing your computer: People sexualize computer voices, lauding a male-sounding tutor voice as better at teaching ‘technical subjects’ and a female-sounding voice as better at teaching ‘love and relationship’ material.

A little empathy from your computer goes a long way: People are more forthcoming in typing messages about their own mistakes “if the computer first ‘apologizes’ for crashing so often.”

It seems to me that attributing human qualities (feelings, sexuality, and camaraderie) to an inanimate object like a computer is a social ill that we should all be concerned about.

Sure, we all spend a lot of time going back and forth between our physical realities, virtual realities, and now augmented realities, but in the process we seem to be losing perspective of what is real and what is not.

Perhaps to too many people, their computers have become their best friends, closest allies, and likely the biggest time sink of anything they do. They are:

– Doing their work at arm’s length, through computers, rather than seriously working together with other people to solve the large and complex problems facing us all. 

– Interacting virtually on social networks rather than with friends in real life, and similarly gaming online rather than meeting at the ballpark for some swings at the bat.

– Blogging and tweeting their thoughts and feelings to their keyboards and screens, rather than to loved ones who care and really want to share.

We have taken shelter behind our computers and to some extent are in love with our computers—both of these are hugely problematic. Computers are tools and not hideaways or surrogate lovers!

Of course, the risk of treating computers as people is that we in turn treat people as inanimate computers—or maybe we already have?

This is a dangerous game of mistaken reality we are playing.

[Photo Source: http://www.wilsoninfo.com/computerclipart.shtml]

A Boss that Looks Like a Vacuum Cleaner


This is too much…an article and picture in MIT Technology Review (September/October 2010) of a robotic boss, called Anybot—but this boss looks like a vacuum cleaner, costs $15,000, and is controlled remotely from a keyboard by your manager.

So much for the personal touch—does this count toward getting some face time with your superiors in the office?

With a robotic boss rolling up to check on employees, I guess we can forget about the chit-chat, going out for a Starbucks together, or seriously working through the issues. Unless of course, you can see yourself looking into the “eyes” of the vacuum cleaner and getting some meaningful dialogue going.

This is an example of technology divorced from the human reality and going in absolutely the wrong direction!