Saving Us From DC Ground Zero

DC

One well-placed nuclear suitcase bomb or ballistic missile strike on DC, and we can say goodbye to virtually the entire hub of the federal government.


As of 2014, there were over 4.2 million federal employees (2.7M in the civilian agencies and 1.5M in the military).


Over 500K are located in the DC, MD, VA tristate area. 


But it’s not just the numbers, it’s that the headquarters of all the major government agencies are located here. 


While, of course, there are backup sites and emergency doomsday sites like Mount Weather (48 miles from DC), there is no telling how much advance notice, if any, we would have, or who would escape and survive a deadly blow to our capital region.


And it could be a chemical, biological, radiological, or nuclear (CBRN) attack that does us in…whether from Russia, China, Iran, North Korea, or any other diabolical enemy out there.


The point is that by concentrating all our federal headquarters, senior leadership, and key resources here, we are, in fact, giving the enemy an easy shot at decapitating the entire country.


Others (like Paul Kupiec in the WSJ) have asked whether some federal agencies could be moved out to other needy cities and communities across the country for economic reasons (to bring jobs and investment), especially those agencies already looking to build new HQ buildings (e.g., the FBI and the Department of Labor). To me, the far more potent question is one of national security.


The main advantage of having the crux of the federal government in the DC area is surely one of coordination–the President wants his Cabinet near him and the Cabinet Secretaries want their senior staff near them, and so on and so on. 


So, you get this mass concentration of a who’s who of the federal government in and around Washington, DC. 


But what about the advances in technology?


Surely, through networks, telecommunications, and teleworking, we can support a geographically diverse workforce without doing significant harm to our operating as one.


We’re talking a very big cultural change here!


It’s one thing to have nuclear missiles roaming the seas on ballistic missile submarines waiting for orders from Washington, DC, and it’s quite another to move the actual government intelligentsia and leadership out from the central hub.


Let’s face it: in a real crisis, with the chaos, the panic, the overwhelmed transportation system, and perhaps simultaneous cyberattacks, no one is really going anywhere–especially in a surprise attack.


If Pearl Harbor (of which we just marked the 75th anniversary) and 9/11 teach us anything, it is that when the sh*t hits the fan, it hits hard and sticks solid.


Working in the Metro DC area, selfishly, I’d like to say keep the investment, jobs, and great opportunities here.


But for the good of the nation and our survival against true existential threats, we’d be much smarter to spread the federal wealth as far and wide across this great nation as we can. 😉


(Source Photo: Andy Blumenthal)

Your Computer Is All Wet

Computer Chip

So I was at my first synagogue men’s club event last week.


A guy at the door was checking people in with a laptop lent by my friend, who is the head of the men’s club.


Sitting at the desk, the check-in guy had a cup of soda, and at one point it got knocked over and spilled on top of the MacBook Air.


I raced over with some napkins to try and wipe it off quickly, and my friend grabbed his laptop and held it upside down to try and get the spill out.


For a while, the computer stayed on, but, as I feared, all the sugary stuff in the soda eventually did it in, and it wouldn’t turn on again.


I emailed my friend a number of times during the week to find out how his laptop was doing. 


He had made an appointment with AppleCare and they said they could fix it, but he said it would cost almost as much as a new computer. 


Also, they gave him a contact somewhere else that specializes in recovering the data/contents on the computer. 


The saga with the computer isn’t over, but on Shabbat my friend in synagogue said to me, “You know, you were the only one who contacted me to inquire how I was doing with the laptop.”


And he gave me a warm smile that said thank you for actually giving a damn. 


I thought to myself that perhaps we only have a few real friends in the world, and it’s not just about who gives us that old attaboy at the fun events. 😉


(Source Photo: Andy Blumenthal)

Green Data Center Cooling


I read with great interest this week on the BBC about two mysterious barges off the East and West Coasts of the U.S.

One barge is by San Francisco and the other by Maine.

The 4-story barges belong to Google.

There is speculation that these may be floating data centers.

I think that is more likely than showrooms for Google Glass.

These barges would potentially avail themselves of the ocean water for cooling the IT equipment.

I would imagine that there could also be some backup and recovery strategy here, tied to their terrestrial data centers.

But how you protect these floating data behemoths is another story.

A white paper by Emerson puts data center energy consumption at about 25% for cooling systems and another 12% for air movement, totaling 37%.

Other interesting new ideas for reducing energy consumption for data center cooling include submersion cooling.

For example, Green Revolution (GR) Cooling is one of the pioneers in this area.

They turn the server rack on its back and the servers are inserted vertically into a dielectric (an electrical insulator–yes, I had to look that up) cooling mineral oil.

In this video, the founder of GR identifies the potential cost-savings, including eliminating chillers and raised floors, as well as an overall 45% reduction in energy consumption (although I am not clear how that jibes with cooling being 37% of energy consumption to begin with).
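
One back-of-the-envelope way to reconcile the two figures (a sketch with assumed numbers, not anything from GR’s video): submersion also eliminates the servers’ own internal fans, which draw a meaningful share of the IT load itself, so the total savings can exceed the facility’s 37% cooling share.

```python
# Back-of-the-envelope reconciliation -- all figures assumed for illustration.
total_kw = 100.0                          # nominal data center draw
cooling_kw = 0.25 * total_kw              # chillers, per the Emerson paper (~25%)
air_kw = 0.12 * total_kw                  # air movement (~12%)
it_kw = total_kw - cooling_kw - air_kw    # 63 kW left for the IT equipment

# Assumption: the servers' internal fans draw roughly 13% of the IT load,
# and submersion cooling eliminates them along with chillers and air handlers.
fan_kw = 0.13 * it_kw

saved = cooling_kw + air_kw + fan_kw
print(f"Total reduction: {saved / total_kw:.0%}")   # -> Total reduction: 45%
```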

Intuitively, one of the trickiest aspects of this would be the maintenance of the equipment, but there is a GR video that shows how to do this as well–and the instructions even state, in good jest, that the “gloves are optional.”

One of my favorite things about submersion cooling, aside from the environmental benefits and cost-savings, is the very cool green tint in the server racks that looks so alien and futuristic.

Turn down the lights and imagine you are on a ship traveling the universe, or maybe just on the Google ship not that far away. 😉

(Source Photo: Green Revolution)

Safeguarding Our Electrical Grid


Popular Science (28 January 2013) has an interesting article on “How To Save The Electrical Grid.”

Power use has skyrocketed with home appliances, TVs, and computers, causing a significant increase in demand and “pushing electricity through lines that were never intended to handle such high loads.”

Our electrical infrastructure is aging, with transformers “now more than 40 years old on average and 70% of transmission lines are at least 25 years old,” while over the last three decades average U.S. household power consumption has tripled!

The result is that the U.S. experiences over 100 mass outages a year to our electrical systems from storms, tornadoes, wildfires, and other disasters.

According to the Congressional Research Service, storm-related outages cost the U.S. economy an estimated $20 billion to $55 billion annually.

For example, in Hurricane Sandy, 8 million homes in 21 states lost power, and in Hurricane Irene, a year earlier, 5.5 million homes lost electricity.

The solution is to modernize our electrical grid:

– Replace a linear electrical design with a loop design, so power can be rerouted around a failure. (Isn’t this basic network architecture, where a line network is doomed by a single point of failure, while a ring or mesh topology can handle interruptions at any given point? See the sketch after this list.)

– Install “fault-current limiters” as shock absorbers so when there is a surge in the grid, we can “absorb excess current and send a regulated amount down the line” rather than causing circuit breakers to open and stop the flow of electrical power altogether.

– Create backup power generation for critical infrastructure such as hospitals, fire stations, police, and so on, so that critical services are not interrupted by problems on the larger grid. This can be expanded to installing solar and other renewable energy resources on homes, buildings, etc.

– Replace outdated electrical grid components and install a smart grid and smart meters to “digitally monitor and communicate home power” and automatically adjust power consumption at the location and device level. Smart technology can help manage the load on the grid and shift non-essential use to off-hours. The estimated cost for modernizing the U.S. grid is $673 billion–but the cost of a single major outage can run into the tens of billions alone. What will it take for this investment to become a national priority?
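
To make the loop-versus-line point concrete, here is a minimal sketch (my own toy illustration, not from the article) of a six-substation grid: cut any single link in a ring and every node stays reachable; cut any single link in a line and the grid splits.

```python
# Toy model: a "line" grid fails on any single link cut; a "ring" does not.

def connected(nodes, edges):
    """Breadth-first check that every node is reachable from the first one."""
    seen, frontier = {nodes[0]}, [nodes[0]]
    while frontier:
        n = frontier.pop()
        for a, b in edges:
            if a == n and b not in seen:
                seen.add(b); frontier.append(b)
            elif b == n and a not in seen:
                seen.add(a); frontier.append(a)
    return seen == set(nodes)

nodes = list(range(6))                 # six substations
line = [(i, i + 1) for i in range(5)]  # linear design
ring = line + [(5, 0)]                 # same line, plus one link closing the loop

for name, edges in (("line", line), ("ring", ring)):
    ok = all(connected(nodes, [e for e in edges if e != cut]) for cut in edges)
    print(f"{name}: survives any single link failure -> {ok}")
# line: survives any single link failure -> False
# ring: survives any single link failure -> True
```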

I would add an additional solution for safeguarding our electrical grid: beefing up all elements of cyber security, from intrusion detection and prevention to grid protection, response, and recovery capabilities. Our electrical system is a tempting target for cyber criminals, terrorists, or hostile nation-states that would seek to deprive us of our ability to power our economy, defense, and political establishments.

While energy independence may be feasible by 2020, we need to make sure that we not only have enough energy resources available, but also the means for reliable and secure energy generation and distribution to every American family and business. 😉

(Source Photo: Andy Blumenthal)

The Internet Lives

While the Internet, with all its information, is constantly changing and being updated, what is great to know is that it is being preserved and archived, so present and future generations can “travel back” and see what it looked like at earlier points in time and have access to the wealth of information contained in it.
This is what the Internet Archive does–this non-profit organization functions as the Library of the Internet. It is building a “permanent access for researchers, historians, scholars, people with disabilities, and the general public to historical collections that exist in digital format.”

In the Internet Archive you will find “texts, audio, moving images, and software as well as archived web pages” going back to 1996 until today.

I tested the Archive’s Wayback Machine with my site, The Total CIO, and was able to see how it looked back on October 24, 2010.
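
For anyone who wants to do the same lookup programmatically, the Archive also offers a simple Wayback “availability” API; here is a quick sketch (the endpoint is per the Archive’s Wayback API documentation as I understand it, and the site URL below is just a placeholder to swap for your own):

```python
# Sketch: ask the Wayback Machine for the snapshot closest to a given date.
import json
import urllib.request

def closest_snapshot(url, timestamp):
    """Return the archived snapshot closest to timestamp (YYYYMMDD), or None."""
    query = f"https://archive.org/wayback/available?url={url}&timestamp={timestamp}"
    with urllib.request.urlopen(query) as resp:
        data = json.load(resp)
    return data.get("archived_snapshots", {}).get("closest")

# Placeholder URL -- substitute the site you want to "travel back" to.
print(closest_snapshot("example.com", "20101024"))
```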

It is wonderful to see our digital records being preserved by the Internet Archive, just like our paper records are preserved in archives such as The Library of Congress (considered “the world’s most comprehensive record of human creativity and knowledge”), The National Archives, which preserves government and historical records, and The National Security Archive, a research institute and library at The George Washington University that “collects and publishes declassified documents through the Freedom of Information Act…[on] topics pertaining to national security, foreign, intelligence, and economic policies of the United States.”

The Internet Archive is located in San Francisco (and my understanding is that there is a backup site in Egypt).

The Internet Archive is created using spider programs that crawl the publicly available pages of the Internet and then copy and store the data, which is indexed three-dimensionally to allow browsing across multiple periods of time.

The Archive now contains roughly 2 petabytes of information and is growing by 20 terabytes per month. According to The Archive, the data is stored on hundreds (by my count, it should be about 2,000) of slightly modified x86 machines running Linux, each storing approximately a terabyte of data.

According to the FAQs, it does take some time for web pages to show up–somewhere between 6 months and 2 years–because of the process of indexing and transferring to long-term storage. Hopefully the process will get faster, but in my opinion, having an organized collection and archive of the Internet is well worth the wait.

Ultimately, the Internet Archive may someday be (or be part of) the Time Capsule of human knowledge and experience that helps us survive a man-made or natural disaster by providing the means to reconstitute the human race itself.

(Source Photo: here)

Running IT as an Ecosystem

Ecosystem

The New York Times (27 November 2011) has an interesting article under “bright ideas” called Turn on the Server. It’s Cold Outside.

The idea, in the age of cloud and distributed computing where the physical location of infrastructure is beside the point, is to place (racks of) servers in people’s homes to warm them from the cold.
The idea is really pretty cool and quite intuitive: rather than use expensive HVAC systems to cool the environment where servers heat up and are housed, we can use the heat-generating servers to warm cold houses and save money and resources on buying and running furnaces to heat them.
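The arithmetic behind the intuition is simple (a rough sketch with an assumed rack size): virtually every watt a server draws ends up as heat, so a single rack throws off as much warmth as several space heaters.

```python
# Rough sketch -- the rack wattage is an assumed, typical figure.
WATTS_TO_BTU_PER_HR = 3.412          # standard conversion factor

rack_draw_watts = 5_000              # assumption: one modest server rack
heat_output = rack_draw_watts * WATTS_TO_BTU_PER_HR
print(f"{heat_output:,.0f} BTU/hr")  # ~17,000 BTU/hr of "free" home heating
```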
While some may criticize this idea on security grounds–since the servers need to be secured–I think you can easily counter that such a strategy, under the right security conditions (some of which are identified in the article–encrypting the data, alarming the racks, and so on), could actually add a level of security by distributing your infrastructure, thereby making it less prone to physical disruption by natural disaster or physical attack.
In fact, the whole movement toward consolidation of data centers should be reevaluated based on such security implications. Would you rather have a primary and backup data center that can be taken out by a targeted missile or other attack, for example, or more distributed data centers that can more easily recover? In fact, the move to cloud computing, with data housed sort of everywhere and anywhere globally, offers the possibility of just such protection and is in a sense the polar opposite of data center consolidation–two opposing tracks, currently being pursued simultaneously.
One major drawback to the idea of distributing servers and using them to heat homes: while offering cost-savings in terms of HVAC, it would be very expensive in terms of maintaining those servers at all the homes they reside in.
In general, while it’s not practical to house government data servers in people’s homes, we can learn to run our data centers in a more environmentally friendly way. For example, the article mentions that Europe is using centralized “district heating,” whereby heat from a more centralized data center is distributed by insulated pipes to neighboring homes and businesses, rather than actually locating the servers in the homes.
Of course, if you can’t heat homes with data servers, there is another option that gets you away from having to cool down all those hot servers: locate them in places with cooler year-round temperatures and use the area’s natural air temperature for climate control. So if you can’t bring the servers to heat the homes, you can at least house them in cold climates to be cooled naturally. Either way, there is the potential to increase our green footprint and cost-savings.
Running information technology operations with a greater view toward environmental impact, and seeing IT in terms of the larger ecosystem it operates in, necessitates a careful balancing of the mission needs for IT, security, manageability, and recovery against the potential benefits of greater energy independence, environmental sustainability, and cost savings. It is the type of innovative, bigger-picture thinking that we can benefit from to break the cycle of inertia and inefficiency that too often confronts us.
(Source Photo: here)

Cloud Computing, The Next Evolution


On November 4-5, 2009, I attended a good CSC Leading Edge Forum on Cloud Computing.

The kickoff by W. Brian Arthur was a highlight for me (he is the author of The Nature of Technology). He provided an excellent conceptualization of cloud and its place in overall technology advancement and the body of world innovation. Essentially, he sees cloud in the 2000s as the next evolution from the Internet of the 1990s. As such, the cloud is computational power in the “virtual world,” providing a host of benefits including easy access, connectivity, and cost efficiency. He sees the cloud coming out of the initial frenzy and into an industry sort-out that will result in a stable build-out.

Another great speaker was David Moschella from CSC, who talked about the myriad benefits of moving to cloud, such as scalability, self-service, pay-as-you-go, agility, and the ability to assemble and disassemble needed components virtually on the fly. With the cloud, we no longer have to own the computing means of production.

Of course, we also talked about the challenges, particularly security. Another speaker spoke about the latency issues with cloud on the WAN, which currently limit some usability for transactional processing.

Over the course of the forum, numerous examples of success were given, including Bechtel achieving a 90% cost advantage by moving storage to the cloud. Others, such as Amazon, were able to put up new websites in 3 weeks versus 4-6 months previously. Educational Testing Service, as another example, is using cloud bursting, since its data center load runs at known cyclical peaks.

Others connected cloud with social computing: “the future of business is collaboration, the future of collaboration is in the cloud.”

In terms of the major types of cloud, I thought the relationship between responsibility and control was interesting (see the sketch after this list). For example:

  • Software as a Service – more “freedom” from responsibility for the service, but less freedom to change the service (i.e., less control)
  • Platform as a Service – a hybrid of the two
  • Infrastructure as a Service – less freedom from responsibility for actual end-user services, but more freedom to change service provision (i.e., more control)
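
To make that tradeoff concrete, here is a minimal sketch (my own illustration, not from the forum) of which layers of the stack the customer still manages under each model–the more layers you control, the more responsibility you carry:

```python
# Illustrative only: customer-managed layers under each cloud service model.
STACK = ["application", "data", "runtime", "middleware",
         "OS", "virtualization", "servers", "storage", "networking"]

CUSTOMER_MANAGES = {
    "SaaS": [],                                   # provider runs the whole stack
    "PaaS": ["application", "data"],              # the hybrid middle ground
    "IaaS": ["application", "data", "runtime",
             "middleware", "OS"],                 # provider supplies raw infrastructure
}

for model, layers in CUSTOMER_MANAGES.items():
    print(f"{model}: customer controls {len(layers)} of {len(STACK)} layers")
```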

In all cases, the examples demonstrated that organizations do not have a lot of leeway on SLAs with cloud providers. It’s pretty much a take-it-or-leave-it proposition, with the vendor’s liability for an outage limited to basically the cost of the service, not the cost of lost business or organizational downtime. However, it was noted that these mega-vendors providing cloud services probably have better availability and security than their customers could achieve on their own. In other words, an outage or security breach will cost either way–the question is where there is a greater chance of it happening.

A good summary was this: “Leading companies are moving development/test and disaster recovery to the cloud,” but then this will reverse, and companies will actually move their production to the cloud and keep mainly a backup and recovery capability in-house. This is similar to how we handle energy now, where we get our electricity from the utilities but have a backup generator should the lights go dark.

E-memory and Meat Memory


As we move towards a “paperless society” and migrate our data to the computer and the Internet, we can find personal profiles, resumes, photos, videos, emails, documents, presentations, news items, scanned copies of diplomas and awards, contact lists, and even financial, tax, and property records.

People have so much information on the web (and their hard drive) these days that they fear one of two things happening:

  1. Their hard drive will crash and they will lose all their valuable information.
  2. Someone will steal their data and their identity (identity theft).

For each of these, people are taking various precautions to protect themselves, such as regularly backing up their data and carefully checking financial and credit reports.

Despite some risks of putting “too much information” out there, the ease of putting it there, and the convenience of having it there—readily available—is driving us to make the Internet our personal storage device.

One man is taking this to an extreme. According to Wired Magazine (September 2009), Gordon Bell is chronicling his life—warts and all—online. He is documenting his online memory project—MyLifeBits—in a book, called Total Recall.

“Since 2001, Bell has been compulsively scanning, capturing and logging each and every bit of personal data he generates in his daily life. The trove includes Web Sites he’s visited (22,173), photos taken (56,282), docs written and read (18,883), phone conversations had (2,000), photos snapped by SenseCam hanging around his neck (66,000), songs listened to (7,139) and videos taken by (2,164). To collect all this information, he uses a staggering assortment of hardware: desktop scanner, digicam, heart rate monitor, voice recorder, GPS logger, pedometer, Smartphone, e-reader.”

Mr. Bell’s thesis is that “by using e-memory as a surrogate for meat-based memory, we free our minds to engage in more creativity, learning, and innovation.”

Honestly, with all the time that Bell spends capturing and storing his memories, I don’t know how he has any time left over for anything creative or otherwise.

Some may say that Gordon Bell has a sort of obsessive-compulsive disorder (OCD)—you think? Others, that he is some sort of genius who is teaching the world to be free and open to remembering—everything!

Personally, I don’t think that I want to remember “everything.” I can dimly remember some embarrassing moments in elementary school and high school that I sure as heck want to forget. And then there are some nasty people who would be better off buried in the sands of time. Also, some painful times of challenge and loss—while they may be considered growth experiences—are not something that I really want on the tip of my memory, in a file folder on my hard drive, or as a record in a database.

It’s good to remember. It’s also sometimes good to forget. In my opinion, what we put online should be things that we want or need to remember or access down the road. I for one like to go online every now and then and do some data cleanup (and in fact there are now some programs that will do this automatically). What I thought was worthwhile, meaningful, or important 6 months or a year ago, may not evoke the same feelings today. Sometimes, like with purchases I made way back when, I think to myself, what was I thinking when I did that? And I quickly hit the delete key (wishing I could do the same with those dumb impulse purchases!). Most of the time, I am not sorry that I did delete something old and I am actually happy it is gone. Occasionally, when I delete something by accident, then I start to pull my hair out and run for the backup—hoping that it really worked and the files are still there.

In the end, managing the hard drive takes more work than managing one’s memories, which we have little conscious control over. Between the e-memory and the meat memory, perhaps we can have more of what we need and want to remember, and can let go and delete the old and undesired—and let bygones be bygones.

Making More Out of Less


One thing we all really like to hear about is how we can do more with less. This is especially the case when we have valuable assets that are underutilized or potentially even idle. This is “low hanging fruit” for executives to repurpose and achieve efficiencies for the organization.

In this regard, there was a nifty little article in Federal Computer Week (15 June 2009) called “Double-duty COOP” about how we can take continuity of operations (COOP) failover facilities and use them for much more than just backup and business recovery purposes in the case of emergencies.

“The time-tested approach is to support an active production facility with a back-up failover site dedicated to COOP and activated only during an emergency. Now organizations can vary that theme”—here are some examples:

Load balancing—“distribute everyday workloads between the two sites.” (See the sketch after this list.)

Reduced downtime—“avoid scheduled outages” for maintenance, upgrades, patches and so forth.

Cost effective systems development—“one facility runs the main production environment while the other acts as the primary development and testing resource.”

Reduced-risk data migration—when moving facilities, rather than physically transporting data and risking some sort of data loss, you can instead mirror the data to the COOP facility and upload the data from there once “the new site is 100 percent operational.”
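
As a minimal sketch of the load-balancing variation (hypothetical site names, my own illustration rather than anything from the article), everyday requests can rotate across both sites, with traffic shifting entirely to the surviving site when one goes dark–which is exactly the COOP failover behavior:

```python
# Sketch: round-robin everyday work across production and COOP sites,
# skipping any site that is down (site names are hypothetical).
from itertools import cycle

SITES = ["prod.example.gov", "coop.example.gov"]

def dispatcher(sites, is_up):
    """Yield the next healthy site for each incoming request."""
    ring = cycle(sites)
    while True:
        site = next(ring)
        if is_up(site):
            yield site   # unhealthy sites are simply skipped
    # (a real dispatcher would also handle every site being down)

status = {"prod.example.gov": True, "coop.example.gov": True}
route = dispatcher(SITES, lambda s: status[s])

print([next(route) for _ in range(4)])   # alternates between the two sites
status["prod.example.gov"] = False       # emergency: production goes dark
print([next(route) for _ in range(3)])   # all traffic now flows to the COOP site
```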

It’s not that any of these ideas is so earth-shattering; rather, it is their sheer simplicity and intuitiveness that I really like.

COOP is almost the perfect example of resources that can be dual-purposed, since they are there “just in case.” While the COOP site must stand ready for the looming contingency, it can also be used prudently to assist day-to-day operational needs.

As IT leaders, we must always look for improvements in the effectiveness and efficiency of what we do. There is no resting on our laurels. Whether we can do more with less, or more with more, either way we are going to advance the organization and keep driving it to the next level of optimization. 

Decentralization, Technology, and Anti-Terror Planning

Given that 9/11 represented an attack on geographically concentrated seats of U.S. financial and government power, is it a good enterprise architecture decision to centralize many or all government headquarters in one single geographic area?

Read about Decentralization, Technology, and Anti-Terror Planning in The Total CIO.