The Data Driven Data Center

Without Power To Its Data Centers, Hong Kong Will Fail

 

Hong Kong ranks fourth on IMD’s annual World Competitiveness ranking, has a population of over 7 million, is the most high-rise-dense city in the world, and defines commerce across the Asia-Pacific region.

The continued global recovery depends on Hong Kong, which drives the recession-busting growth figures evident across the region. Its sustainability as a global financial player relies entirely on its data centers. They are its lifeblood, but also one of its most significant challenges.

The power demands of a world center of finance are massively expensive. Efficient power generation and distribution against a backdrop of rising energy costs have put Hong Kong’s and mainland China’s energy corporations in the spotlight.

Globally, people are consuming more energy than ever before. The US Energy Information Administration expects global consumption to rise by 56% by 2040, the equivalent of 4,000 power plants.

Identifying efficiencies can transform the bottom line of an energy company instantly and significantly. Inefficient systems and processes already add millions of dollars of needless expense to budgets. This cannot continue.

As power becomes scarcer and more expensive, transforming into a near-priceless commodity, citizens will expect governments to compete on a macro level to accommodate their energy requirements. Energy sustainability will quickly become an issue affecting everyone.

The Stirred Dragon

These challenges are already evident in Hong Kong and China. The dramatic growth of power use within Hong Kong is a direct result of emergent data technologies. Businesses and the consumer landscape have pushed the city, known for its technological leadership, towards an always-on culture heavily reliant on power and data infrastructure. The 24/7 financial demands of the city place further pressure on its energy companies.

Data center growth in Asia as a whole is exploding as more than half the world’s population (an estimated 4.4 billion people) catches up with the West in technology, digital culture, and general infrastructure.

Equinix’s capital expenditure in Hong Kong reached $150 million this month with the expansion of its latest Tsuen Wan facility. SoftLayer, IBM’s infrastructure arm, pledged $1.2 billion for facilities in key financial centers, including Hong Kong. China Unicom has invested $3 billion into an integrated telecommunications facility in the region to support its soaring user base.

Analyst figures mirror this. Frost & Sullivan expects the Hong Kong data center market to be worth $802 million by 2019. These are huge numbers that demand attention.

Power Driving Power

With statistics like these, it is crucial that energy companies have resilient, optimized infrastructures that support their complex operational requirements.

Scaling power in real time while retaining enough capacity to ensure service is not just mission-critical for Hong Kong’s energy companies; it is critical on a global scale.

Behind the intelligent generation of power is IT and the data centers of power companies. As energy distributors look to enhance and drive greater efficiency across their operations, they require agile IT and data center infrastructure to support their objectives.

Efficient urban energy management is mirrored by efficient energy control within the data center.

Implementing visibility measures and identifying immediate thermal and power-related concerns – as well as predicting future trends – allows management to maximize budgets and guarantee operational redundancy. Disaster avoidance is essential in the context of power distribution. Power cannot fail.

Costs Against Distribution

The age of cheap power in Asia is ending. Shanghai Grid announced a 10% increase across its network last year. PetroChina is raising its oil prices drastically. Power companies, if they are to remain competitive for consumers, have to deliver commercial improvements elsewhere in their operations. The data center offers the perfect opportunity.

As Zhu Hua, Tencent’s Senior Data Center Architect and a member of the China Data Center Committee, highlighted recently, the potential cost savings are vast. “For instance, for a data center with 100,000 servers, if PUE drops by 0.1, it will help the data center save 12m CNY (almost $2 million) a year.”
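As a rough sketch of the arithmetic behind that claim: only the server count and the PUE reduction come from the quote; the average server draw and electricity price below are our assumptions, chosen to roughly reproduce the quoted figure.

    # Back-of-envelope PUE savings (Python); inputs marked "assumed" are ours
    SERVERS = 100_000          # from the quote
    AVG_SERVER_KW = 0.25       # assumed average draw per server (250 W)
    PRICE_CNY_PER_KWH = 0.55   # assumed electricity price
    PUE_REDUCTION = 0.1        # from the quote

    # Facility power = IT load x PUE, so cutting PUE by 0.1 saves
    # 0.1 kWh of overhead for every kWh the IT load consumes.
    it_load_kw = SERVERS * AVG_SERVER_KW
    saved_kwh_per_year = it_load_kw * PUE_REDUCTION * 24 * 365
    print(f"{saved_kwh_per_year * PRICE_CNY_PER_KWH / 1e6:.1f}M CNY/year")  # ~12.0M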

One of Hong Kong’s leading power companies recognized similar potential efficiency improvements in its own data center and with the help of RF Code optimized its operations to cut its current power requirements.

This involved bringing the facility in line with globally recognized ASHRAE guidelines, deploying an environmental monitoring solution in a single day, and rectifying thermal complications that would, if left unaddressed, have caused the facility to fail.

Read about how the company achieved this in our new case study and follow RF Code on Twitter and LinkedIn for updates on the Asian data center market and rising global energy crisis.


Wearable Technology's Big Data Could Wear Out Your Data Center

 

The first generation of Android Wear devices was released last week, bringing with it a new era of interaction and connectedness and, in the long run, potential strain on the world’s data centers.

As smartphones have become ubiquitous and lost their excitement factor, phone manufacturers are searching for new ways to increase their sales bases. As consumers claw for a simpler, more natural user experience, wearable technology presents one of the largest opportunities.

It is clear that strapping technology to our bodies is more than a fad; it is a major technology market with the power to change how we interact with the world.

Wearing With Pride

Last year’s Juniper Research report estimated that by 2018, $19B will be spent on devices that interact with our phones. This estimate is certainly mirrored by Google’s desire to integrate Android into everything: Android Wear, Google Glass, Nest.

Early devices like Pebble have shown there is a desire for wearable technology, but the real growth at this early stage – and threat to the data center – looks to be through specialist wearable technology which answers specific questions on people’s wellbeing.

Health sensors have exploded in popularity over the last 18 months, and there is no indication that demand will slow. Mirroring the Juniper figures above, a similar report by MarketsandMarkets values the consumer healthcare sensor market at nearly $50B by 2020.

Products like the Fitbit activity tracker and the Jawbone Up suggest those early market estimates are on track. Over 3 million Fitbits were sold in the US last year, and Jawbone acquired BodyMedia for $100 million. These are big numbers for a fledgling market.

Connecting Your Body

As the technology itself continues to improve and the stigma of wearing technology subsides, growth will continue, and with it the continuous flow of data sent to data centers.

Product advancements have already increased the volume of data being generated by wearable devices. Early-stage products used to require manual syncing, but now everything is constant and automated. Data from millions of sensors will keep flooding into data centers in increasing amounts.
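To get a feel for the scale, here is a back-of-envelope estimate; the device count, sync frequency, and payload size are illustrative assumptions, not figures from this post.

    # Rough daily data volume from continuously syncing wearables (Python)
    DEVICES = 10_000_000     # assumed active wearables
    SYNCS_PER_HOUR = 12      # assumed automated sync frequency
    PAYLOAD_BYTES = 2_000    # assumed readings + metadata per sync

    bytes_per_day = DEVICES * SYNCS_PER_HOUR * 24 * PAYLOAD_BYTES
    print(f"~{bytes_per_day / 1e12:.1f} TB/day")  # ~5.8 TB flowing in, every day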

The development of Bluetooth Low Energy standards will only add to the vast amount of data piling into the business-critical facilities of the world’s largest data-driven companies.

The list of Bluetooth Smart Ready products (which includes scales, proximity sensors, heart rate monitors, breathalyzers, blood pressure monitors, and pedometers) is growing daily as consumers become more inclined to monitor their health analytically. Data is no longer something confined to IT, but a core part of how many of us choose to live our lives.

Behind The Scenes

Advancements in the battery technology that supports mobility will further open up a world where people no longer reach for their pockets, but sensors reach out to owners.

This expectation of an always-on, ever-connected life brings with it a specific set of data-driven financial and IT challenges. Wearable technology will become a critical issue for those managing the infrastructure powering this new world.

The Data Center Journal noted six months ago that the wearable reality has arrived. Worryingly for companies moving into the market, its growth has the potential to damage data center sustainability, reliability, and service levels.

Organisations are only just beginning to grasp the data challenges created by mobility and smart devices, so the prospect of two more disruptive markets on the horizon - wearable technology and the Internet of Things - poses significant risks to those behind the commercially critical data infrastructure that supports everything.

In a future blog on the topic, we’ll address what specific IT and financial challenges accompany wearable technology and the methods for overcoming them with an automated, intelligent data center.


Data Center Optimization: Invasion of the Data Snatchers

 

Has the data center from which Target’s consumer data was stolen looked a lot like a scene out of RF Code Theater Productions’ “Supernatural Info-tivity” lately? With 110 million victims in the November data breach (70M more than the originally reported 40M), the Target breach ushered the age of Big Data Theft into public consciousness and resulted in major losses for the company. Follow-up reports that Michaels and Neiman Marcus suffered similar attacks only reinforced that your data is everywhere: each time one of us swipes a payment card, we could be sending our most vital details straight into a scary situation.

This month, details were released that parallel the breach even more closely with the typical horror movie. Just as we’ve seen in thousands of B-level scary movies, where teenagers are told not to drive down that road or a girl is warned not to open that box, it was revealed that Target had been warned. Two months before the huge data breach, IT security staff warned Target of potential vulnerabilities in its card payment system; we wouldn’t be watching a horror movie if Target had heeded their advice.

True, there were no gruesome deaths at the hands of a psychopath or demonic possession. Instead, the frights came when Target purchased credit protection plans for all 110 million victims, was hit with multiple lawsuits from financial institutions, saw its stock price fall to a 20-month low, and ultimately reported lower-than-expected 2013 Q4 results. With analysts predicting that the entire ordeal may cost the big-box retailer more than $1 billion in fees, many on Wall Street and in Target’s boardroom would describe the incident as… well… a bloodbath.

The Target situation goes far beyond data center security alone, as malware was even installed on point-of-sale registers. These registers are all connected to remote management software and “protected” with what are now being reported as weak passwords. The industry’s defense has been to rally for more information about cyber threats through the Retail Industry Leaders Association’s Cybersecurity and Data Privacy Initiative, and to encourage chip-and-PIN cards to replace the existing technology. Already the standard in Europe, these cards are supposedly more secure because they are difficult to counterfeit and encrypt information on-site. However, it’s worth noting that the switch to chip-and-PIN cards would be an enormous expense for financial institutions (some already carrying the costly burden of compromised consumer information following the Target data breach) and that Target’s data thieves successfully stole encrypted PIN information.

With the potential for even more damaging breaches as the role of data in our daily lives increases, RILA’s initiative and the exploration of more secure card technology are definitely steps in the right direction, but I can’t help but wish that more attention were being paid to actual data center infrastructure management improvements. After all, warnings can be ignored, weapons and defenses can be used against you, and it’s usually the unlocked door or the flat tire that lets the bad guy sneak in.

 


RF Code’s M174 Asset Tags Ensure Security, ROI, and Ongoing Savings

 

Deploying an automated asset management solution from RF Code can be an incredibly beneficial way to simplify audits, guarantee audit accuracy, ensure compliance, and lower operational costs in your data center, but there is obviously a significant up-front cost associated with rolling out the asset tags, readers, and software needed to generate and collate all of the asset location and lifecycle data. While RF Code customers like IBM and CME Group have reported rapid returns on their asset management rollouts (both reporting ROIs of less than 12 months, with ongoing savings thereafter), it’s just good fiscal sense to take a hard look at any potential ITAM solution, see where the biggest cost drivers are, and determine what can be done both to maximize the return on the initial deployment and to prolong the lifespan of the deployed solution without incurring additional costs.

In an RF Code deployment, the total cost of the asset tags is typically the largest part of the purchase price. After all, each RF Code reader can read beacon data from thousands of RF Code asset tags and sensors within range (a single reader covers around 2,500-5,000 square feet depending on the characteristics of your data center or facility, more in open-air environments), so the total number of readers deployed in any given scenario will typically be fairly low, especially compared with the aggregate cost of affixing our active RFID asset tags to each of your enterprise assets. Therefore, to achieve the best possible value, it’s imperative that these asset tags perform reliably, delivering savings as quickly as possible and continuing to deliver them over the entire lifespan of the assets to which they’re affixed.
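A minimal cost sketch illustrates why tag cost dominates. The facility size, asset count, and reader price below are illustrative assumptions; the reader coverage and rough per-tag price echo figures mentioned elsewhere on this blog.

    # Why tag cost dominates an active RFID deployment (Python sketch)
    import math

    ASSETS = 5_000                 # assumed number of assets to tag
    FACILITY_SQFT = 20_000         # assumed facility size
    READER_COVERAGE_SQFT = 2_500   # conservative end of the 2,500-5,000 range
    TAG_PRICE = 20.0               # rough per-tag price
    READER_PRICE = 1_500.0         # assumed per-reader price

    readers = math.ceil(FACILITY_SQFT / READER_COVERAGE_SQFT)
    print(f"{readers} readers: ${readers * READER_PRICE:,.0f}")  # 8 readers: $12,000
    print(f"{ASSETS} tags: ${ASSETS * TAG_PRICE:,.0f}")          # $100,000 in tags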

Here at RF Code, we took these concerns into account when we designed our asset management solutions. To ensure savings over the entire lifecycle of an enterprise asset (typically 3-5 years), we designed our asset tags with a 5+ year lifespan, allowing a tag to be affixed to an asset as soon as it is purchased and deployed and ensuring that the asset is reliably tracked and monitored right up until it is retired at end-of-life. And the industrial-grade adhesive we use to affix tags to assets makes it nearly impossible for tags to be accidentally knocked loose or removed, ensuring that they deliver value across the entire lifecycle of both the tag and the asset itself.

But in talking with our customers, some concerns came to light:

  • Because the asset tag is more-or-less permanently affixed to the asset, what if I need to retire an asset earlier than expected?  Is there any way I can re-use the asset tag rather than just throwing it away?

  • What about malfeasance?  If you’re actually reading signals sent by the asset tags, can’t someone just pull the tag off and then steal the asset without anyone knowing there’s a problem?

Our M174 asset tags directly address these issues with replaceable installation tabs. These inexpensive tabs – available in form factors designed for simple flag, loop, or thumb-screw installation so they can be used on a broad variety of equipment – can be easily removed from and replaced in the M174 tag housing itself. Therefore, if you need to retire an asset but the asset tag affixed to it is still functional, you can simply cut the installation tab, replace it with a new one, and re-deploy the asset tag on another asset. This guarantees that each asset tag you purchase will be used for its entire lifespan, delivering savings and asset visibility all the while.

Removing an installation tab from an M174 asset tag.

But what about ensuring asset security? Can’t people just cut these tabs or yank the asset tag off an asset and run? That’s where our new tamper tabs, also designed exclusively for use with the M174 asset tag, come in. RF Code tamper tabs – also available in flag, loop, and thumbscrew designs – include a carbon fiber filament embedded in the adhesive of the tab. Once the tamper tab is connected to an M174 tag, the carbon fiber filament completes a tamper detection circuit. If this circuit is subsequently broken, the tag immediately begins broadcasting a tamper alert status. Once the tamper tab is applied to an asset, the carbon fiber will be torn if the tab is cut, if the tab adhesive is peeled away from the asset or the tab itself, or if the M174 tag is pulled off the tamper tab by force. And because it takes over 8 pounds of force to pull the tag off of the tab, normal movement of assets will not break the circuit or trigger tamper alerts.
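Conceptually, acting on a tamper alert is simple. The sketch below shows the kind of event handling a monitoring back end might perform; the beacon fields and function names are invented for illustration and are not RF Code’s actual wire protocol or API.

    # Hypothetical handling of a tamper alert beacon (Python sketch)
    from dataclasses import dataclass

    @dataclass
    class TagBeacon:
        tag_id: str
        asset_id: str
        tamper_alert: bool   # True once the tamper detection circuit is broken

    def notify_security(asset_id: str, tag_id: str) -> None:
        print(f"TAMPER: asset {asset_id} (tag {tag_id}) may have been removed")

    def handle_beacon(beacon: TagBeacon) -> None:
        # Circuit breaks when the tab is cut, the adhesive is peeled away,
        # or the tag is pulled off the tab with more than 8 lb of force.
        if beacon.tamper_alert:
            notify_security(beacon.asset_id, beacon.tag_id)

    handle_beacon(TagBeacon("M174-0042", "server-r12-u07", tamper_alert=True))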

Want to learn more about RF Code’s new M174 asset tag? Read technical information about the tag here, or watch Chris Gaskins discuss the M174 tag and our new tamper tabs. And if you would like to discuss the ways these innovative solutions ensure efficiency and contribute to the bottom line, comment below or contact us: we’d love to hear from you.


DCIM Solutions: It's 10:00PM - Do You Know Where Your Servers Are?

 

Guest Blog by William Bloomstein, Market Strategist for iTRACS

Data Center Infrastructure Management (DCIM) means many things to many people, depending on their short-term and long-range goals for the data center. For some, the immediate priority is reducing energy consumption and costs, cutting OPEX. For others, it's getting a better handle on space so they can optimize every square foot of their existing footprint, an important part of their strategy for reducing CAPEX. For still others, it's getting control over their IT asset inventory: what they have and where it is on the floor. Here, data center owners and operators around the world agree on a core tenet of IT asset management:

Any DCIM solution worth its salt has to have a strong asset tracking and location capability that can find, identify, and confirm the location of every asset on the floor.

This is where the DCIM partnership of iTRACS and RF Code shines.

You can't manage what you can't find.

It may not be as bad as worrying about the whereabouts of a teenager at night, but tracking down your servers and other IT assets is vital for data centers seeking firm control over their asset portfolio. It reduces time spent on asset management by your IT staff, accelerates audits, helps with lease management, and streamlines tech refresh and other data center initiatives.

So let me ask you this:

Do you know where all of your servers are located, what they are doing, and why they are there?  And HOW do you know that you, indeed, know?

If you can't answer both of these questions with confidence, you need a joint DCIM solution from iTRACS and RF Code. Let me explain ...

There are two types of assets on the floor that commonly go undetected and/or unauthorized, jeopardizing the efficiency and well-being of your operation:

  1. Misplaced Assets. These are older "forgotten" assets sitting in your racks without your knowledge or control (out of your line of sight). You don't even know they are still hanging around. Perhaps the Line of Business removed its applications and left the physical servers to sit there uselessly, forgotten. Or the boxes' leases expired but no one did anything about it. In any event, these assets are the "lost children" of the IT portfolio. And if you don't asset-tag them, they may sit on your racks, unseen, needlessly consuming power and space, forever.
  2. Misinstalled Assets. These are assets that are installed in the wrong location, but you may not know it until it's too late and your IT services are negatively impacted! This often occurs in critical facilities where servers must be deployed or relocated fast, outside the normal working practices. In situations like this, assets can be inadvertently misinstalled in the wrong racks. But there's no way for you to KNOW that a mistake has been made unless you have an asset tracking solution like iTRACS with RF Code. Without asset tracking, you may remain in the dark about these assets until it's too late and the complaints about poor or nonexistent service roll in. Not a pleasant place to find yourself.

Unauthorized assets can wreak havoc on your floor ...

These problem assets can:

  • Consume power without delivering quantifiable value back to the Business (under-utilized or literally unused)
  • Take up valuable rack space for no purpose
  • Draw in network services for no purpose
  • Delay time-to-service (misinstalled servers), costing the Business in lost transactions, customer goodwill, revenue, and profitability
  • Potentially jeopardize availability (misinstalled assets), creating technical issues that can reverberate across the physical ecosystem
  • Waste money - both OPEX and CAPEX

Fortunately, there is an answer – DCIM featuring the iTRACS Converged Physical Infrastructure Management® (CPIM®) software suite with RF Code's real-time asset data inside.

Use DCIM to run a more efficient data center operation with tighter reins over your asset portfolio

Working together in a powerful DCIM partnership, iTRACS and RF Code can help make your rogue assets go away. Integration with RF Code lets iTRACS CPIM® users collect, manage, and analyze asset location information captured directly from RF Code's RFID sensors. RF Code's real-time asset location data – analyzed and visualized in iTRACS' award-winning DCIM management platform – lets users confirm the location of every asset in their portfolio, manage physical moves/adds/changes with 100% confidence, and conduct other asset management tasks over the lifecycle of their equipment. The sensors cannot lie. So rogue assets can no longer hide.
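The reconciliation at the heart of this is easy to picture. The sketch below is illustrative only (it is not the iTRACS or RF Code API): compare where each asset is supposed to be against where its tag is actually being read, and the misinstalled, unknown, and silent assets fall out immediately.

    # Planned vs. sensor-reported asset locations (Python sketch)
    planned = {"srv-101": "rack-A1", "srv-102": "rack-A2", "srv-103": "rack-B7"}
    reported = {"srv-101": "rack-A1", "srv-103": "rack-C3", "srv-900": "rack-A4"}

    misinstalled = {a: (planned[a], loc) for a, loc in reported.items()
                    if a in planned and planned[a] != loc}
    unknown = [a for a in reported if a not in planned]   # undocumented on the floor
    silent = [a for a in planned if a not in reported]    # expected but not reporting

    print(misinstalled)   # {'srv-103': ('rack-B7', 'rack-C3')}
    print(unknown)        # ['srv-900']
    print(silent)         # ['srv-102']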

iTRACS and RF Code - the dream team in DCIM

Scenario: You need to add a bunch of new servers as fast as possible, and you'd better do the install right the first time, since there's no margin for error when revenue is at stake. You need to make sure your technicians physically install the right servers in the right racks. The last thing you need is misplaced servers disrupting the environment and delaying services to the business. Watch this brief demo to see how iTRACS and RF Code are collaborating to help ensure that your assets are always where they're supposed to be.

Nate Silver, Rich Data, and DCIM

 

Last week I had the good fortune to attend a keynote presentation by Nate Silver. Silver’s presentation, entitled “The Signal and the Noise: Why So Many Predictions Fail, and Some Don’t” (not coincidentally this is also the title of his new book), addressed the availability of big data and how it affects decision-making.  Being a big time data wonk myself, I was pretty excited.

Yes, yes: I probably need to get out more.

Anyhow, one of Silver’s main points was that when we talk about “big data” we need to think of it as a mass of independent data points, and that without accurate correlation between those data points it can be worthless, or even destructive.  In his opinion, it’s better to think about the value of all this information by focusing not on big data, but on rich data. What’s the difference?  While big data refers to the massive, undifferentiated deluge of data points, rich data refers to data that has been processed and refined, eliminating the extraneous and leaving only the meaningful information that can then be used for prediction, planning, and decision making.

This nicely parallels the need for accurate, reliable, historic data to ensure efficient asset management and environmental monitoring in the data center.  In many cases, the professionals responsible for monitoring and managing the data center are doing it with almost no data at all, let alone rich data.  Often, the location of servers is known only from a single data point gathered during an occasional inventory audit, or the level of cooling for the entire data center is set using a few sparsely distributed thermostats.  It’s very unusual for a data center manager to have at hand the truly reliable rich data they need to ensure operational efficiency.

So where would big data center data be refined into rich data center data? Clearly, this would occur in a sophisticated back-end system (an asset management database, a DCIM platform, a building/facilities management system, etc.). Without reliable systems in place that enable users to distill the vast quantities of undifferentiated big data flowing in from their various tags and sensors into rich data, the data center professional is left with the Herculean task of sifting through mountains of data points manually, searching for the signal in the noise.  Given the complexity and dynamism of the data center environment, undertaking this effort without reliable automated assistance seems unlikely to yield much in the way of results.

But no matter how good the system, it all starts with data. As Silver explained, to obtain rich data you must have quantity, quality and variety.  In other words: 

  • Quantity: you have to have enough data at hand to be able to discern patterns and trends

  • Quality: the data must be as accurate as possible

  • Variety: the data should be gathered from multiple sources in order to eliminate bias

So what would a data center need to generate truly rich data?  First and foremost, this data must be generated automatically: the effort required to manually collect enough data about asset locations and environmental conditions, and to ensure that it is both accurate and reasonably up to date, would be overwhelming for all but the smallest of data centers.  So manually driven data collection processes (clipboards, bar codes, passive RFID tags, and so on) just won't cut it.

Clearly, what's needed is a combination of hardware that will automatically generate and deliver the needed data (active RFID anyone?) and software that helps the user to correlate the vast amount of received “big data” into rich data that can be used to perceive patterns and trends … and ultimately to make informed decisions.

  • For environmental monitoring applications such as managing power and cooling costs, identifying hot spots, ensuring appropriate air flow, and generating thermal maps, this requires an array of sensors that generate a continuous flow of accurate environmental readings: temperature, humidity, air flow/pressure, power usage, leak detection, and so on.

  • For asset management applications such as physical capacity planning, inventory management, asset security, lifecycle management and so on, this requires tags that automatically report the physical location of individual assets as well as any change in these locations without manual interaction.

The data generated by these devices would then be collected and refined in the back-end system, resulting in meaningful, actionable information.  But ultimately it is the data itself that is the key component here. Without a continuous flow of accurate, reliable data about your data center assets and the environment that surrounds them – data that meets all of Silver’s requirements of quantity, quality, and variety – even the best DCIM platform will be of only limited value.
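As a concrete (if toy) illustration of that refinement step, the sketch below distills a raw stream of temperature beacons into per-rack averages and flags hot spots; the readings and the alert threshold are invented for illustration.

    # Big data -> rich data: summarize raw beacons, surface the signal (Python)
    from collections import defaultdict
    from statistics import mean

    readings = [                    # (rack, inlet temperature in deg C)
        ("rack-A1", 22.5), ("rack-A1", 23.1), ("rack-B2", 31.9),
        ("rack-B2", 32.5), ("rack-C3", 24.0),
    ]
    HOT_SPOT_C = 30.0               # assumed alert threshold

    by_rack = defaultdict(list)
    for rack, temp in readings:
        by_rack[rack].append(temp)

    summary = {rack: round(mean(temps), 1) for rack, temps in by_rack.items()}
    hot_spots = [rack for rack, avg in summary.items() if avg > HOT_SPOT_C]
    print(summary)      # {'rack-A1': 22.8, 'rack-B2': 32.2, 'rack-C3': 24.0}
    print(hot_spots)    # ['rack-B2'] -- the signal in the noise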


Active RFID Tags in Data Center Asset Management: A Quick Overview

 

Data centers, with their countless racks and cabinets, may seem static at first glance, but these dynamic, unpredictable environments require precise and efficient asset management. Due to the high associated costs of manual inventory tracking and the inefficiency of inventory consolidation, data centers are increasingly turning to RFID to reduce costs, increase inventory accuracy, and improve overall operational efficiency. One recent example is the Social Security Administration’s announcement of a 90 percent reduction in the amount of labor required for tracking data center inventory and a 33 percent improvement in inventory accuracy after the adoption of RFID.

Active vs. Passive RFID: What's the Difference?

RFID technology automatically tracks assets by sending radio waves to a reader. There are two types of RFID tags: active and passive. Active RFID tags have their own power source (i.e., a battery), while passive tags rely on a reader for power. The chief advantage of active RFID over passive is automation: active RFID tags continuously emit data, whereas passive RFID tags must be manually scanned. While active and passive RFID tag technologies are frequently evaluated together, and passive solutions certainly provide great value in some deployment scenarios, active tags offer far greater value and functionality for data center asset tracking.

Automation is the Key to Efficiency

With the average data center occupying between 10,000 and 25,000 square feet, manual asset tracking is no longer a viable option. Cisco recently published a case study chronicling its decision to shift from manual asset tracking to RFID tags to avoid error and decrease costs. Cisco IT chose active RFID tags because passive RFID tags would have led to the same inconsistencies and labor as its previous spreadsheet-tracking method: passive RFID tracking would require the team to manually scan tags whenever equipment was relocated, pulling valuable employees away from more vital tasks. Active RFID tags can transmit signals up to 300 feet, which means that stationary readers, such as those offered by RF Code, can remain in centralized locations; passive tags, by contrast, have a read range of only 20-40 feet.

Real-Time Data Provides Continuous Visibility

Active RFID tags’ ability to detect changes in the data center environment makes a truly up-to-date DCIM system possible. By independently monitoring data center assets, organizations receive real-time updates unavailable with passive tags. RFID solutions offered by RF Code are built on open standards that allow the RFID system to be easily integrated with a center’s existing DCIM or ERP systems, preventing data system sprawl, unnecessary inventory reconciliation, and IT overhead.

Costs: Rapid ROI, Ongoing Savings

Finally, while the per-unit cost of active RFID may be higher than passive (for example, RF Code's asset tags are priced at under $20 per tag, while functionally similar passive tags typically run $5-15 per tag), active RFID provides greater ROI (return on investment); passive RFID is more typically used on lower-cost, smaller assets due to its tendency towards error and malfunction.

Because of the higher cost of active tags, the up-front cost of an active RFID deployment may be higher than a comparable passive RFID solution. However, once the costs of the handheld passive RFID readers and portals required to read the tags are factored in, this price difference typically becomes far less significant ... or disappears altogether. More importantly, passive RFID solutions require ongoing, periodic manual effort to energize the tags and collect the data they provide, consuming many costly person-hours as an ongoing annual expense.

Conversely, the automated data provided by active RFID solutions requires no manual interaction and delivers continuous information about assets and their environment for the entire life of the tag or sensor. This automation ensures accuracy and prevents asset loss, while also freeing personnel who would typically be tasked with gathering asset data manually to perform more important, less time-consuming work.  The result is a very rapid return on investment (typically less than a year) and ongoing savings that result in a far lower total cost of ownership for active RFID solutions.
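A hedged five-year comparison makes the shape of the tradeoff clear. Only the rough tag prices come from this post; the reader hardware, audit labor, and audit-frequency figures below are assumptions for illustration.

    # Five-year TCO sketch: active vs. passive RFID (Python)
    ASSETS, YEARS = 5_000, 5
    ACTIVE_TAG, PASSIVE_TAG = 20.0, 10.0        # rough per-tag prices
    ACTIVE_READERS = 8 * 1_500.0                # assumed fixed-reader hardware
    PASSIVE_HANDHELDS = 4 * 3_000.0             # assumed handheld scanners
    # Passive tags must be walked and scanned; assume quarterly audits.
    AUDIT_HOURS, LOADED_RATE, AUDITS_PER_YEAR = 80, 60.0, 4

    active = ASSETS * ACTIVE_TAG + ACTIVE_READERS
    passive = (ASSETS * PASSIVE_TAG + PASSIVE_HANDHELDS
               + AUDIT_HOURS * LOADED_RATE * AUDITS_PER_YEAR * YEARS)
    print(f"active: ${active:,.0f}   passive: ${passive:,.0f}")
    # active: $112,000   passive: $158,000 over five years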



Efficient Data Center Power Monitoring Made Easy

 

This blog was originally posted by Julie Brown on Server Technology Inc's Data Center Power Blog.


Data center managers today face many of the same challenges: they are trying to be more efficient and save money, which includes running devices at higher temperatures, so the need to monitor these environments is far more critical than it has been in the past.

The Data Center Power channel recently caught up with Calvin Nicholson, senior director of software and firmware here at Server Technology, to discuss the latest developments in our Sentry Power Manager (SPM), a rack-level data center power management software system that gives users power and environmental data, predicts where they might have future issues, lets them manage all PDUs from one dashboard, and alerts them to and diagnoses problems.

Nicholson has been busy traveling cross-country, including a seven-city road show with RF Code and Nlyte Software, meeting with data center managers to learn their top concerns and how Server Technology can help them overcome these challenges.

At Server Tech, we pride ourselves on providing the most accurate and reliable products in the industry, and a big part of what makes our solutions so successful is that we build products based on customer feedback and the need to solve real-world data center problems.

As Nicholson explains, when it comes to top concerns in the data center, it’s almost the same old story – managers are concerned with efficiency, capacity planning, lowering power costs and doing more with less – but organizations are getting more creative in what they’re doing with solutions. We've just released the next version of SPM, version 5.3, which features some interesting additions.

For one, SPM now supports RF Code’s environmental monitoring capabilities. Users can bring in RF Code and not only have power monitoring functionality in the data center, but also integrate RF Code’s environmental monitoring for sensor information like airflow, pressure, and humidity. The new version of SPM offers a single pane of glass combining Server Technology and RF Code. We first integrated with RF Code by supporting its RFID tags in our PDUs, and now we are essentially bringing an API out of RF Code’s Zone Manager and into SPM.
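In spirit, the integration looks like the sketch below: poll environmental readings from one system's HTTP API and feed them to another dashboard. The URL, endpoint, and JSON fields here are hypothetical stand-ins, not the actual Zone Manager or SPM interfaces.

    # Hypothetical polling integration (Python sketch)
    import json
    from urllib.request import urlopen

    ZONE_MANAGER_URL = "http://zonemanager.example.com/api/sensors"  # invented

    def fetch_readings(url: str = ZONE_MANAGER_URL) -> list:
        # Expected (invented) shape: [{"sensor": "th-01", "temp_c": 24.5}, ...]
        with urlopen(url) as resp:
            return json.load(resp)

    def push_to_dashboard(readings: list) -> None:
        for r in readings:
            # Stand-in for an SPM update call
            print(f"{r['sensor']}: {r['temp_c']} C")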

You always hear it: you can’t manage what you’re not monitoring. In many cases, data center managers will want to compare power information with other data. SPM has a new feature that lets users compare the information it gathers against other power monitoring devices in the data center, which is important for ensuring accuracy and increasing efficiency.

Organizations in all industries are trying to do more with less, and SPM makes that easy. Customers can use a combination of our PDUs and SPM to automate PDU discovery, firmware upgrades, and configuration of all the settings they want, in one location, at one time. Configuring PDUs in SPM using the SNAP template allows customers to set up critical parameters, saving time, energy, and personnel.

Data Center Efficiency: Flirting with Disaster

 

Some points to ponder:

  • “North American businesses lose $26.5 billion annually from avoidable downtime” – CA Technologies

  • “Global demand for datacenter electricity will quadruple by 2020” – Greenpeace

  • “Enterprises that systematically manage the lifecycle of their IT assets will reduce costs by as much as 30% during the first year, and between 5 and 10% annually during the next five years.” – Patricia Adams, Gartner IT Asset Management and TCO Summit

Data center inefficiency affects all levels of the corporate hierarchy: from IT asset managers and data center directors, to accounting and finance professionals, right up to the folks sitting in the boardroom.  The combination of skyrocketing energy costs and the ever-increasing business demand for data and computing power has made it clear that improved data center asset management and power and cooling monitoring technologies aren’t just “nice to have”: they are vitally important solutions, necessary for today’s businesses to thrive, or even survive.

And yet adoption of automated asset management, real-time environmental monitoring technologies, and DCIM platforms has proven to be a slow process. While the escalating costs associated with inefficiency are clearly understood, and the dangers of downtime associated with a lack of data center visibility and intelligence are all too apparent, it often takes organizations months, some even years, to choose a solution, test it, and finally deploy it.

Which begs the question: What are you waiting for?

You know that every day without up-to-date asset management data is another day that your capacity planning and management decision-making abilities are limited. Every day without accurate, real-time environmental monitoring is another day of inefficient, wasteful power and cooling policies. Every day without reliable data center visibility is another day of increased risk.

And yet we’ve found that many of our customers didn’t finally commit to full deployment until after they suffered a data center or business disaster. For some it was the creeping realization that writing off 15-20% of their physical data center assets each year because they simply couldn’t find them was crippling their ability to plan capacity and costing their organization millions of dollars annually. For others it was catastrophic losses caused by a crippling downtime event that could have been easily prevented with some temperature, humidity, and leak detection sensors. And for still others it was staggeringly high recurrent operating expenses coupled with a growing awareness of the environmental impact of data center power and cooling inefficiency.

All of these customers had one unfortunate thing in common: they all wished they’d deployed a comprehensive data center efficiency solution sooner. But despite clearly understanding the risks of operating their data centers in a “business as usual” mode, it took an event that transformed well-perceived financial risk into real-world financial loss for them to commit to modernizing and changing their policies.

So, what are the factors that are holding you back?  Is it budget?  Process re-engineering?  The prospect of educating and training staff on new tools?   Maybe you’re just having trouble choosing between the many options that are available in the marketplace.  Or is it convincing “The Powers That Be” that business-as-usual is not necessarily the best road forward?  

And most importantly: Will it take a data center disaster to overcome these obstacles? Let us know your thoughts and opinions in the comments below.

Data Center Uptime: Why 0.1% Makes a Difference

 

This blog entry was originally posted by Xiaotang Ma from our partner, Server Tech Inc. We've seen a lot of negative PR around downtime lately, affecting the likes of American Airlines, LinkedIn, and, as of yesterday, the French government. Both those disasters and the information in this blog highlight the importance of maintaining uptime: even a slight increase from 99.9% toward 100% availability can be the difference between a costly, publicized failure and uninterrupted service.  Here are some statistics we've come across recently that support this:


  • According to RightScale, the average downtime duration is 7.5 hours.

  • According to the National Archives and Records Administration in Washington, 93% of businesses that experienced downtime for more than 10 days went bankrupt in less than a year.

  • According to the Ponemon Institute, the cost of downtime is approximately $5,600 per minute.
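Combining two of those figures gives a sobering worked example; the statistics are the ones cited above, but combining them this way is our illustration, since the averages come from different studies.

    # Cost of an average outage (Python)
    COST_PER_MINUTE = 5_600      # Ponemon Institute figure
    AVG_OUTAGE_HOURS = 7.5       # RightScale figure

    print(f"${COST_PER_MINUTE * AVG_OUTAGE_HOURS * 60:,.0f}")  # $2,520,000 per outage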

Our integrations with leading DCIM partners such as Server Tech Inc can help take you from downtime to uptime, even if your availability increases by only a fraction of a percent. Learn about our integration solution here.

- - - 

I finally decided to upgrade to a PlayStation 3 last summer, after six years of quality video game time with my PS2.  I was amazed at the awesome new features the PS3 offered, but what I liked most were the cool things you can download from the PlayStation Store: games, apps, and even movies.  However, I noticed a few instances where I could not get on the network to download things because of server downtime.

Although it didn’t bother me very much, it did raise a good question: what might the potential consequences be if a company like Google experienced server downtime like this?

The Telecommunications Industry Association (TIA) is an organization accredited by the American National Standards Institute (ANSI) to develop industry standards for information and communication technology products, including data centers.  In 2005, ANSI/TIA-942 was published, defining four different tiers of data centers.  There are various differences between the tiers, but for this blog post we’ll just briefly go over availability, which is reflected in the table below.

As you can see, the difference in uptime between a Tier 1 and a Tier 4 data center appears minimal: just 0.324%.  However, when you take into consideration the number of minutes in a year, a Tier 1 data center will suffer about 28.4 more hours of downtime annually than a Tier 4.

Type     Availability   Annual Downtime (min)   Annual Downtime (hrs)
Tier 1   99.671%        1,730.37                28.84
Tier 2   99.741%        1,362.21                22.70
Tier 3   99.982%        94.67                   1.58
Tier 4   99.995%        26.30                   0.44
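The table's figures follow directly from the availability percentages: annual downtime is simply (1 - availability) times the minutes in a year (525,960, averaging in leap years).

    # Deriving the downtime columns from availability (Python)
    MINUTES_PER_YEAR = 365.25 * 24 * 60   # 525,960

    for tier, avail in [("Tier 1", 0.99671), ("Tier 2", 0.99741),
                        ("Tier 3", 0.99982), ("Tier 4", 0.99995)]:
        downtime_min = (1 - avail) * MINUTES_PER_YEAR
        print(f"{tier}: {downtime_min:,.0f} min = {downtime_min / 60:.2f} hrs")
    # Tier 1: 1,730 min = 28.84 hrs ... Tier 4: 26 min = 0.44 hrs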

When selecting a tier to fit your business needs, some important questions to ask might be “Can my organization afford this many hours of downtime?” and “What are the potential consequences for this amount of downtime?”  For example, large companies in the banking and healthcare industries will not likely opt for a Tier 1 data center due to the potential implications of prolonged downtime.  It is imperative for data center administrators to be able to ask the right questions and select the most cost-effective option that fits organizational/industry business needs.

What are your thoughts?  Please feel free to share in the comments.
