Has the data center from which Target’s consumer data was stolen seemed like a scene out of RF Code Theater Production’s “Supernatural Info-tivity” lately? With 110 million victims in the November data breach (70 million higher than the originally reported 40 million), the Target data breach ushered the age of Big Data Theft into public consciousness and resulted in major losses for the company. Follow-up reports that Michaels and Neiman Marcus suffered similar attacks only reinforced the point: your data is everywhere, and every time one of us swipes a payment card we could be sending our most vital details straight into a scary situation.
This month, details were released that bring the breach even closer to a typical horror movie. Just as we’ve seen in thousands of B-level scary movies, where teenagers are told not to drive down that road or a girl is warned not to open that box, it was revealed that Target was warned. Two months before the huge data breach, IT security staff warned Target of potential vulnerabilities in its card payment system, and we wouldn’t be watching a horror movie if Target had heeded their advice.
True, there may not have been any gruesome deaths at the hands of a psychopath or demonic possession. Instead, the frights occurred when Target purchased credit protection plans for all 110 million victims, was struck with multiple lawsuits from financial institutions, saw its stock price fall to a 20-month low, and ultimately reported lower-than-expected 2013 Q4 results. With analysts predicting that the entire ordeal may cost the big box retailer more than $1 billion in fees, many on Wall Street and in Target’s boardrooms would describe the incident as…well… a bloodbath.
The Target situation goes far beyond strictly data center security, as malware was even installed on point-of-sale registers. These registers are all connected to remote management software and “protected” with what are now being reported as weak passwords. The industry’s defense has been to rally for more information about cyber threats, through the Retail Industry Leaders Association’s Cybersecurity and Data Privacy Initiative, and to encourage chip-and-PIN cards to replace the existing technology. Already the standard in Europe, these cards are supposedly more secure because they are difficult to counterfeit and encrypt information on-site. However, it’s worth noting that the switch to chip-and-PIN cards would be an enormous undertaking for financial institutions (some already carrying the costly burden of compromised consumer information following the Target data breach) and that Target’s data thieves successfully stole encrypted PIN information.
With the potential for even more damaging breaches as the role of data in our daily lives increases, RILA’s initiative and the exploration of more secure card technology are definitely steps in the right direction, but I can’t help but wish that more attention were being paid to actual data center infrastructure management improvements. After all, warnings can be ignored, weapons and defenses can be turned against you, and it’s usually the unlocked door or the flat tire that lets the bad guy sneak in the back.
Deploying an automated asset management solution from RF Code can be an incredibly beneficial way to simplify audits, guarantee audit accuracy, ensure compliance, and lower operational costs in your data center. But there is obviously a significant up-front cost associated with rolling out the asset tags, readers, and software necessary to generate and collate all of that asset location and lifecycle data. While RF Code customers like IBM and CME Group have reported rapid return on their asset management rollouts (both reporting ROIs of less than 12 months, with ongoing savings thereafter), it’s just good fiscal sense to take a hard look at any potential ITAM solution, identify the biggest cost drivers in deploying it, and determine what can be done both to maximize the return on the initial deployment and to prolong the lifespan of the deployed solution without incurring additional costs.
In deploying a solution from RF Code, typically the total cost of the asset tags is the largest part of the total purchase cost. After all, each RF Code reader can read beacon data from thousands of RF Code asset tags and sensors within range (a single RF Code reader covers around 2500-5000 square feet depending on the characteristics of your data center or facility, more in open-air environments). So the total number of readers deployed in any given scenario will typically be fairly low, especially when compared with the aggregate cost of affixing our active RFID asset tags to each of your enterprise assets. Therefore, in order to achieve the best possible value it’s imperative that these asset tags perform reliably to deliver up front savings as quickly as possible and that they continue to deliver savings over the entire lifespan of the assets to which they’re affixed.
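As a back-of-the-envelope sketch of why reader count stays low, the arithmetic above can be expressed directly. The function name and figures here are illustrative, using the conservative end of the coverage range quoted above rather than any official sizing guidance:

```python
import math

def estimate_reader_count(floor_area_sqft, coverage_per_reader_sqft=2500):
    """Rough estimate of fixed readers needed for a given floor area.

    2,500 sq ft is the conservative end of the per-reader coverage
    range quoted above; open-air environments may need fewer readers.
    """
    return max(1, math.ceil(floor_area_sqft / coverage_per_reader_sqft))

# A 20,000 sq ft data center at conservative coverage:
print(estimate_reader_count(20000))  # → 8
```

Even at the conservative end, a handful of readers covers a facility whose racks might hold thousands of tagged assets, which is why the tags dominate the purchase cost.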
Here at RF Code, we took a lot of these concerns into account when we designed our asset management solutions. To ensure savings over the entire lifecycle of an enterprise asset (typically 3-5 years) we designed our asset tags to have a 5+ year lifespan, allowing an asset tag to be affixed to an asset as soon as it is purchased and deployed and ensuring that the asset is reliably tracked and monitored right up until it is retired at end-of-life. And the industrial-grade adhesive we use for affixing tags to assets makes it nearly impossible for tags to accidentally be knocked loose or removed from assets, ensuring that they deliver value across the entire lifecycle of both the tag and the asset itself.
But in talking with our customers, some concerns came to light:
Because the asset tag is more-or-less permanently affixed to the asset, what if I need to retire an asset earlier than expected? Is there any way I can re-use the asset tag rather than just throwing it away?
What about malfeasance? If you’re actually reading signals sent by the asset tags, can’t someone just pull the tag off and then steal the asset without anyone knowing there’s a problem?
Our M174 asset tags directly address these issues with replaceable installation tabs. These inexpensive installation tabs (available in form factors designed for simple flag, loop, or thumb-screw installation, to ensure they can be used on a broad variety of equipment) can be easily removed from and replaced in the M174 tag housing itself. Therefore, if you need to retire an asset while the asset tag affixed to it is still functional, you can simply cut the installation tab, replace it with a new one, and then re-deploy the asset tag for use with another asset. In this way you are guaranteed that each asset tag you purchase will be used for its entire lifespan, delivering savings and asset visibility all the while.
But what about ensuring asset security? Can’t people just cut these tabs or yank the asset tag off an asset and run? That’s where our new tamper tabs, also designed exclusively for use with the M174 asset tag, come in. RF Code tamper tabs (also available in flag, loop, and thumb-screw designs) include a carbon fiber filament embedded in the adhesive of the tab. Once the tamper tab is connected to an M174 tag, the carbon fiber filament completes a tamper detection circuit. If this circuit is subsequently broken, the tag immediately begins broadcasting a tamper alert status. Once the tamper tab is applied to an asset, the carbon fiber will be torn if the tab is cut, if the tab adhesive is peeled away from the asset or the tab itself, or if the M174 tag is pulled off the tamper tab by force. And because it takes over 8 pounds of force to pull the tag off of the tab, normal movement of assets will not break the circuit or trigger tamper alerts.
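On the monitoring side, a tamper alert is just another status in the tag's beacon, so reacting to it is straightforward. The sketch below is hypothetical: the field names (`tag_id`, `tamper`) and handler are illustrative, not the actual RF Code payload format or API:

```python
def handle_beacon(beacon, alert_log):
    """Hypothetical handler for an asset-tag beacon payload.

    Field names ('tag_id', 'tamper') are illustrative; the real
    RF Code beacon format may differ.
    """
    if beacon.get("tamper"):
        # Circuit broken: tab cut, adhesive peeled, or tag pulled off.
        alert_log.append(f"TAMPER ALERT: tag {beacon['tag_id']}")
    return alert_log

log = []
handle_beacon({"tag_id": "M174-0042", "tamper": True}, log)
print(log)  # → ['TAMPER ALERT: tag M174-0042']
```

The point is that the alert arrives the moment the circuit breaks, not hours later during an audit.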
Want to learn more about RF Code’s new M174 asset tag? Read technical information about the tag here, or watch Chris Gaskins discuss the M174 tag and our new tamper tabs. And if you would like to discuss the ways in which these innovative solutions ensure efficiency and contribute to the bottom line, comment below or contact us: we’d love to hear from you.
Guest Blog by William Bloomstein, Market Strategist for iTRACS
Data Center Infrastructure Management (DCIM) means many things to many people, depending on their short-term and long-range goals for the data center. For some, the immediate priority is reducing energy consumption and costs, cutting OPEX. For others, it's getting a better handle on space so they can optimize every square foot of their existing footprint, an important part of their strategy for reducing CAPEX. For still others, it's getting control over their IT asset inventory: what they have and where it is on the floor. Here, data center owners and operators around the world agree on a core tenet of IT asset management:
Any DCIM solution worth its salt has to have a strong asset tracking and location capability that can find, identify, and confirm the location of every asset on the floor.
This is where the DCIM partnership of iTRACS and RF Code shines.
You can't manage what you can't find.
It may not be as bad as worrying about the whereabouts of a teenager at night, but tracking down your servers and other IT assets is vital for data centers seeking firm control over their asset portfolio. It reduces time spent on asset management by your IT staff, accelerates audits, helps with lease management, and streamlines tech refresh and other data center initiatives.
So let me ask you this:
Do you know where all of your servers are located, what they are doing, and why they are there? And HOW do you know that you, indeed, know?
If you can't answer both of these questions with confidence, you need a joint DCIM solution from iTRACS and RF Code. Let me explain ...
There are two types of assets on the floor that commonly go undetected and/or unauthorized, jeopardizing the efficiency and well-being of your operation:
- Misplaced Assets. These are older "forgotten" assets sitting in your racks without your knowledge or control (out of your line of sight). You don't even know they are still hanging around. Perhaps the Line of Business removed its applications and left the physical servers to sit there uselessly, forgotten. Or the boxes' leases expired but no one did anything about it. In any event, these assets are the "lost children" of the IT portfolio. And if you don't asset-tag them, they may sit on your racks, unseen, needlessly consuming power and space, forever.
- Misinstalled Assets. These are assets that are installed in the wrong location, but you may not know it until it's too late and your IT services are negatively impacted! This often occurs in critical facilities where servers must be deployed or relocated fast, outside the normal working practices. In situations like this, assets can be inadvertently misinstalled in the wrong racks. But there's no way for you to KNOW that a mistake has been made unless you have an asset tracking solution like iTRACS with RF Code. Without asset tracking, you may remain in the dark about these assets until it's too late and the complaints about poor or nonexistent service roll in. Not a pleasant place to find yourself.
Unauthorized assets can wreak havoc on your floor ...
These problem assets can:
- Consume power without delivering quantifiable value back to the Business – under-utilized or literally unused
- Take up valuable rack space for no purpose
- Draw in network services for no purpose
- Delay time-to-service (misinstalled servers), costing the Business in lost transactions, customer goodwill, revenue, and profitability
- Potentially jeopardize availability (misinstalled assets), creating technical issues that can reverberate across the physical ecosystem
- Waste money - both OPEX and CAPEX
Fortunately, there is an answer – DCIM featuring the iTRACS Converged Physical Infrastructure Management® (CPIM®) software suite with RF Code's real-time asset data inside.
Use DCIM to run a more efficient data center operation with tighter reins over your asset portfolio
Working together in a powerful DCIM partnership, iTRACS and RF Code can help make your rogue assets go away. Integration with RF Code lets iTRACS CPIM® users collect, manage, and analyze asset location information captured directly from RF Code's RFID sensors. RF Code's real-time asset location data – analyzed and visualized in iTRACS' award-winning DCIM management platform – lets users confirm the location of every asset in their portfolio, manage physical moves/adds/changes with 100% confidence, and conduct other asset management tasks over the lifecycle of their equipment. The sensors cannot lie. So rogue assets can no longer hide.
iTRACS and RF Code - the dream team in DCIM
Scenario: You need to add a bunch of new servers as fast as possible and you'd better do the install right the first time, since there's no margin for error when revenue is at stake. You need to make sure your technicians physically install the right servers in the right racks. The last thing you need is misplaced servers disrupting the environment and delaying services to the business. Watch this brief demo to see how iTRACS and RF Code are collaborating to help ensure that your assets are always where they're supposed to be.
Last week I had the good fortune to attend a keynote presentation by Nate Silver. Silver’s presentation, entitled “The Signal and the Noise: Why So Many Predictions Fail, and Some Don’t” (not coincidentally this is also the title of his new book), addressed the availability of big data and how it affects decision-making. Being a big time data wonk myself, I was pretty excited.
Yes, yes: I probably need to get out more.
Anyhow, one of Silver’s main points was that when we talk about “big data” we need to think of it as a mass of independent data points, and that without accurate correlation between multiple data points it can be worthless, or even destructive. In his opinion it’s better to think of the value of all this information by focusing not on big data but on rich data. What’s the difference? While big data refers to the massive, undifferentiated deluge of data points, rich data refers to data that has been processed and refined, eliminating the extraneous and leaving only the meaningful information that can then be used for predicting, planning, and decision making.
This nicely parallels the need for accurate, reliable, historical data to ensure efficient asset management and environmental monitoring in the data center. In many cases the professionals responsible for monitoring and managing the data center are doing it with almost no data at all, let alone rich data. Often the location of servers is known only from a single data point gathered during an occasional inventory audit, or the level of cooling for the entire data center is set using a few sparsely distributed thermostats. It’s very unusual for a data center manager to have at hand the truly reliable rich data they need to ensure operational efficiency.
So where would big data center data be refined into rich data center data? Clearly, this would occur in a sophisticated back-end system (an asset management database, a DCIM platform, a building/facilities management system, etc.). Without reliable systems in place that enable users to distill the vast quantities of undifferentiated big data flowing in from their various tags and sensors into rich data, the data center professional is left with the Herculean task of sifting through mountains of data points manually, searching for the signal in the noise. Given the complexity and dynamism of the data center environment, undertaking this effort without reliable automated assistance seems unlikely to yield much in the way of results.
But no matter how good the system, it all starts with data. As Silver explained, to obtain rich data you must have quantity, quality and variety. In other words:
Quantity: you have to have enough data at hand to be able to discern patterns and trends
Quality: the data must be as accurate as possible
Variety: the data should be gathered from multiple sources in order to eliminate bias
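Applied to data center telemetry, Silver's three requirements might look something like this in practice. This is a hypothetical sketch: the field names, thresholds, and function are illustrative and not part of any RF Code product:

```python
from statistics import mean

def refine_readings(readings, min_count=3, valid_range=(0.0, 60.0)):
    """Distill raw sensor beacons into one trusted value per location.

    - Quality: drop readings outside a plausible temperature range.
    - Quantity: require a minimum number of samples before reporting.
    - Variety: average across distinct sensors to damp single-sensor bias.
    Thresholds here are illustrative defaults, not vendor settings.
    """
    by_location = {}
    for loc, sensor_id, temp_c in readings:
        if valid_range[0] <= temp_c <= valid_range[1]:
            by_location.setdefault(loc, {})[sensor_id] = temp_c
    return {loc: round(mean(vals.values()), 1)
            for loc, vals in by_location.items()
            if len(vals) >= min_count}

raw = [("rack-12", "s1", 24.0), ("rack-12", "s2", 25.0),
       ("rack-12", "s3", 26.0), ("rack-12", "s4", 999.0)]  # s4 is noise
print(refine_readings(raw))  # → {'rack-12': 25.0}
```

The obviously bogus reading is filtered out (quality), a single-sensor fluke can't dominate (variety), and nothing is reported until enough samples exist (quantity): big data in, rich data out.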
So what would a data center need to generate truly rich data? First and foremost, it's obvious that this data must be generated automatically: the effort required to manually collect enough data about asset locations and environmental conditions, and to ensure that it is both accurate and reasonably up to date, would be overwhelming for all but the smallest of data centers. So manually driven data collection processes (clipboards, bar codes, passive RFID tags, and so on) just won't cut it.
Clearly, what's needed is a combination of hardware that will automatically generate and deliver the needed data (active RFID anyone?) and software that helps the user to correlate the vast amount of received “big data” into rich data that can be used to perceive patterns and trends … and ultimately to make informed decisions.
For environmental monitoring applications such as managing power and cooling costs, identifying hot spots, ensuring appropriate air flow, generating thermal maps and so forth this requires an array of sensors that generate a continuous flow of accurate environmental readings -- temperature, humidity, air flow/pressure, power usage, leak detection, and so on.
For asset management applications such as physical capacity planning, inventory management, asset security, lifecycle management and so on, this requires tags that automatically report the physical location of individual assets as well as any change in these locations without manual interaction.
The data generated by these devices would then be collected and refined in the back-end system, resulting in meaningful, actionable information. But ultimately it's the data that matters. Without a continuous flow of accurate, reliable data about your data center assets and the environment that surrounds them (data that meets all of Silver’s requirements of quantity, quality, and variety) even the best DCIM platform will be of only limited value.
Data centers, with their countless racks and cabinets, may seem static at first glance, but these dynamic, unpredictable environments require precise and efficient asset management. Due to the high associated costs of manual inventory tracking and the inefficiency of inventory consolidation, data centers are increasingly turning to RFID to reduce costs, increase inventory accuracy, and improve overall operational efficiency. One recent example is the Social Security Administration’s announcement of a 90 percent reduction in the amount of labor required for tracking data center inventory and a 33 percent improvement in inventory accuracy after the adoption of RFID.
Active vs. Passive RFID: What's the Difference?
RFID technology is used to automatically track assets by sending radio waves to a reader. There are two types of RFID tags: active and passive. Active RFID tags have their own power source (i.e., a battery), while passive tags rely on a reader for power. The chief advantage of active RFID over passive is automation: active RFID tags can continuously emit data, whereas passive RFID tags must be manually scanned. While active and passive RFID technologies are frequently evaluated together, and passive solutions certainly provide great value in some deployment scenarios, active tags offer far greater value and functionality for data center asset tracking.
Automation is the Key to Efficiency
With the average data center ranging between 10,000 and 25,000 square feet, manual asset tracking is no longer a viable option. Cisco recently published a case study chronicling its decision to shift from manual asset tracking to RFID tags to avoid error and decrease costs. Cisco IT chose active RFID tags because passive RFID tags would have led to the same inconsistencies and labor as its previous spreadsheet-tracking method: passive RFID tracking would require the team to manually scan tags whenever equipment was relocated, pulling valuable employees away from more vital tasks. Active RFID tags can transmit a signal up to 300 feet, which means stationary readers, such as those offered by RF Code, can remain in centralized locations; passive tags, by comparison, have a read range of only 20-40 feet.
Real-Time Data Provides Continuous Visibility
Active RFID tags’ ability to detect changes in the data center environment facilitates the only truly up-to-date DCIM system. By independently monitoring data center assets, organizations are able to receive real-time updates unavailable with passive tags. RFID solutions offered by RF Code are built on open standards which allow the RFID system to be easily integrated into the center’s existing DCIM or ERP, thus preventing data system sprawl, unnecessary inventory reconciliation, and IT overhead.
Costs: Rapid ROI, Ongoing Savings
Finally, active RFID cost per unit may be higher than passive (for example, RF Code's asset tags are priced at under $20 per tag, while functionally similar passive tags typically run $5-15 per tag), but active RFID provides greater return on investment (ROI). Passive RFID is more typically used on lower-cost, smaller assets due to its tendency toward error and malfunction.
Due to the higher cost of active RFID tags compared with passive RFID tags, the up-front cost of an active RFID deployment may be higher than that of a comparable passive RFID solution. However, once the costs of the handheld passive RFID readers and portals required to read the tags are factored in, this price difference typically becomes far less significant, or disappears altogether. More importantly, passive RFID solutions require ongoing, periodic manual effort to energize the tags and collect the data they provide, consuming many costly person-hours as a recurring annual expense.
Conversely, the automated data provided by active RFID solutions requires no manual interaction and delivers continuous information about assets and their environment for the entire life of the tag or sensor. This automation ensures accuracy and prevents asset loss, while freeing the personnel who would otherwise be tasked with gathering asset data manually to perform more important work. The result is a very rapid return on investment (typically less than a year) and ongoing savings that yield a far lower total cost of ownership for active RFID solutions.
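The TCO argument can be made concrete with a simple model. All the numbers below are placeholders for illustration only, not actual RF Code or competitor pricing; the structural point is that recurring manual labor, not hardware, dominates the passive-RFID side:

```python
def five_year_tco(tag_cost, tag_count, reader_cost, reader_count,
                  annual_labor_hours, labor_rate=50.0, years=5):
    """Simple total-cost-of-ownership model: hardware plus ongoing labor.

    All inputs are placeholders; plug in real quotes and labor rates
    for an actual evaluation.
    """
    hardware = tag_cost * tag_count + reader_cost * reader_count
    labor = annual_labor_hours * labor_rate * years
    return hardware + labor

# Hypothetical 1,000-asset data center:
active = five_year_tco(20, 1000, 1500, 8, annual_labor_hours=0)
passive = five_year_tco(10, 1000, 2500, 4, annual_labor_hours=400)
print(active, passive)  # → 32000.0 120000.0
```

Even with cheaper tags, the recurring scanning labor makes the hypothetical passive deployment several times more expensive over five years.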
This blog was originally posted by Julie Brown on Server Technology Inc's Data Center Power Blog.
Data center managers today face many of the same challenges: they are trying to be more efficient and save money, which includes running devices at higher temperatures, so the need to monitor these environments is far more critical than it has been in the past.
The Data Center Power channel recently caught up with Calvin Nicholson, senior director of software and firmware here at Server Technology, to discuss the latest developments in our Sentry Power Manager (SPM), a rack-level data center power management software system that gives users power and environmental data, predicts where they might have future issues, lets them manage all PDUs from one dashboard, and alerts them to and diagnoses problems.
Nicholson has been busy traveling cross-country, including a 7 city road show with RF Code and Nlyte Software, and meeting with different data center managers to figure out what their top concerns are and how Server Technology can help them overcome these challenges.
At Server Tech, we pride ourselves on providing the most accurate and reliable products in the industry. A big part of what makes our solutions so successful is that we build products based on customer feedback and the need to solve real-world data center problems.
As Nicholson explains, when it comes to top concerns in the data center, it’s almost the same old story – managers are concerned with efficiency, capacity planning, lowering power costs and doing more with less – but organizations are getting more creative in what they’re doing with solutions. We've just released the next version of SPM, version 5.3, which features some interesting additions.
For one, SPM now supports RF Code’s environmental monitoring capabilities. Users can bring in RF Code and not only have power monitoring functionality in the data center, but also integrate with RF Code’s environmental monitoring for sensor information like airflow, pressure, and humidity. SPM’s new version offers a single pane of glass combining Server Technology and RF Code. We first integrated with RF Code by building support for its RFID tags into our PDUs, and now we are essentially bringing an API out of RF Code’s Zone Manager and into SPM.
You always hear it: you can’t manage what you’re not monitoring. In many cases, data center managers are going to want to compare power information with other data. SPM has a new feature that lets users compare the information it gathers against other power monitoring devices in the data center. This is important for ensuring accuracy and increasing efficiency.
Organizations in all industries are trying to do more with less, and SPM makes that easy. Customers can use a combination of our PDUs and SPM to automate PDU discovery and firmware upgrades, and auto-configure all the settings they want, in one location, at one time. Configuring PDUs in SPM using the SNAP template allows customers to set up critical parameters while saving time, energy, and personnel.
Some points to ponder:
“North American businesses lose $26.5 Billion annually from avoidable downtime” – CA Technologies
“Global demand for datacenter electricity will quadruple by 2020" – Greenpeace
“Enterprises that systematically manage the lifecycle of their IT assets will reduce costs by as much as 30% during the first year, and between 5 and 10% annually during the next five years.” – Patricia Adams, Gartner IT Asset Management and TCO Summit
Data center inefficiency affects all levels of the corporate hierarchy: from IT asset managers and data center directors, to accounting and finance professionals, right up to the folks sitting in the boardroom. The combination of skyrocketing energy costs and the ever-increasing business demand for data and computing power has made it clear that improved data center asset management and power and cooling monitoring technologies aren’t just “nice to have”: they are vitally important solutions that are necessary for today’s businesses to thrive–or even survive.
And yet adoption of automated asset management and real-time environmental monitoring technologies and DCIM platforms has proven to be a slow process. While the escalating costs associated with inefficiency are clearly understood and the dangers of downtime associated with lack of data center visibility and intelligence are all too apparent, it often takes organizations months–some even years–to choose a solution, test it, and finally deploy it.
Which begs the question: What are you waiting for?
You know that every day without up-to-date asset management data is another day that your capacity planning and management decision-making abilities are limited. Every day without accurate, real-time environmental monitoring is another day of inefficient, wasteful power and cooling policies. Every day without reliable data center visibility is another day of increased risk.
And yet we’ve found that many of our customers didn’t finally commit to full deployment until after they suffered a data center or business disaster. For some it was the creeping realization that writing off 15-20% of their physical data center assets each year because they simply couldn’t find them was crippling their ability to plan capacity and costing their organization millions of dollars annually. For others it was catastrophic losses caused by a crippling downtime event that could have been easily prevented with some temperature, humidity, and leak detection sensors. And for still others it was staggeringly high recurrent operating expenses coupled with a growing awareness of the environmental impact of data center power and cooling inefficiency.
All of these customers had one unfortunate thing in common: they all wished they’d deployed a comprehensive data center efficiency solution sooner. But despite clearly understanding the risks of operating their data centers in a “business as usual” mode, it took an event that transformed well-perceived financial risk into real-world financial loss for them to commit to modernizing and changing their policies.
So, what are the factors that are holding you back? Is it budget? Process re-engineering? The prospect of educating and training staff on new tools? Maybe you’re just having trouble choosing between the many options that are available in the marketplace. Or is it convincing “The Powers That Be” that business-as-usual is not necessarily the best road forward?
And most importantly: Will it take a data center disaster to overcome these obstacles? Let us know your thoughts and opinions in the comments below.
This blog entry was originally posted by Xiaotang Ma from our partner, Server Tech Inc. We've seen a lot of negative PR around downtime lately, affecting the likes of American Airlines, LinkedIn, and as of yesterday, the French government. Both the aforementioned disasters and the information contained in this blog highlight the importance of maintaining uptime. Even the slight increase from 99.9% to 100% availability could be the difference between a costly, publicized failure and uninterrupted service. Here are some statistics we've come across recently that support this:
According to RightScale, the average downtime duration is 7.5 hours.
According to the National Archives and Records Administration in Washington, 93% of businesses that experienced downtime for more than 10 days went bankrupt in less than a year.
According to the Ponemon Institute, the cost of downtime is approximately $5,600 per minute.
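Combining two of the figures above gives a rough sense of scale. Note this is only a back-of-the-envelope composite, since the numbers come from different studies with different methodologies:

```python
# Ponemon per-minute cost applied to RightScale's average outage duration.
cost_per_minute = 5600
avg_downtime_minutes = 7.5 * 60  # 7.5-hour average outage

print(cost_per_minute * avg_downtime_minutes)  # → 2520000.0
```

In other words, a single average-length outage at the average per-minute cost runs to roughly $2.5 million.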
Our integrations with leading DCIM partners such as Server Tech Inc can help take you from downtime to uptime, even if your availability only increases by 1%. Learn about our integration solution here.
- - -
I finally decided to upgrade to a PlayStation 3 last summer after six years of quality video game time with my PS2. I was amazed at the awesome new features the PS3 offered, but what I liked the most were the cool things you can download from the PlayStation Store, which included games, apps and even movies. However, I have noticed a few instances where I could not get on the network to download things because of server downtime.
Although it didn’t bother me very much, it did bring up a good question: What might be some potential consequences if a company like Google experienced server downtime like this?
The Telecommunications Industry Association (TIA) is an organization accredited by the American National Standards Institute (ANSI) to develop industry standards for information and communication technology products, including data centers. In 2005, ANSI/TIA-942 was published, which defined four different tiers for data centers. There are various differences between the tiers, but for this blog post we’ll just briefly go over availability, which is reflected below:
- Tier 1: 99.671% availability
- Tier 2: 99.741% availability
- Tier 3: 99.982% availability
- Tier 4: 99.995% availability
As you can see, the difference in uptime between a Tier 1 and a Tier 4 data center appears minimal at just 0.324%. However, when you consider the number of minutes in a year, a Tier 1 data center will have roughly 28.4 more hours of downtime annually than a Tier 4.
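That downtime gap can be checked directly from the availability percentages. A quick sketch (the exact figure shifts slightly depending on whether you count 8,760 or 8,766 hours in a year, which accounts for small differences from quoted values):

```python
HOURS_PER_YEAR = 24 * 365.25  # 8,766 hours, averaging in leap years

def annual_downtime_hours(availability_pct):
    """Hours of downtime per year implied by an uptime percentage."""
    return (100.0 - availability_pct) / 100.0 * HOURS_PER_YEAR

# TIA-942 availability figures for Tier 1 and Tier 4:
tier1 = annual_downtime_hours(99.671)
tier4 = annual_downtime_hours(99.995)
print(round(tier1), round(tier4, 1), round(tier1 - tier4, 1))
```

A Tier 1 facility is down about 29 hours a year, a Tier 4 well under one hour, so the "tiny" 0.324% gap is more than a full day of outage.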
When selecting a tier to fit your business needs, some important questions to ask might be “Can my organization afford this many hours of downtime?” and “What are the potential consequences for this amount of downtime?” For example, large companies in the banking and healthcare industries will not likely opt for a Tier 1 data center due to the potential implications of prolonged downtime. It is imperative for data center administrators to be able to ask the right questions and select the most cost-effective option that fits organizational/industry business needs.
What are your thoughts? Please feel free to share in the comments.
ASHRAE's latest thermal guidelines present data center operators with the opportunity to significantly reduce their power and cooling expenses. By carefully monitoring the temperatures in and around your data center assets you can safely increase temperature set points without increasing the risk of equipment failure and downtime.
The question, then, is how many RF Code sensors you need in order to comply with the guidelines and begin to realize these savings? There are two answers: the minimum coverage necessary to meet the ASHRAE guidelines, and the level of coverage that provides the greatest ability to control power and cooling costs.
To meet the minimal ASHRAE guidelines, you must deploy:
- At least 1 RF Code M250 Fixed Reader (how many readers are required will depend on the size of your data center)
- 3 R150-5 High-Performance Temperature Tags on every third rack in your data center
- Sensors must be installed on the front of the rack, with one tag at the top, one in the center, and one at the bottom of the rack
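The minimal deployment scales simply with rack count. The helper below is a hypothetical illustration of that sizing rule, not an RF Code tool:

```python
import math

def minimal_tag_count(num_racks: int) -> int:
    """Temperature tags for the minimal deployment described above:
    three R150-5 tags (top/center/bottom, front) on every third rack."""
    instrumented_racks = math.ceil(num_racks / 3)
    return instrumented_racks * 3

# A 90-rack data center instruments 30 racks with 3 tags each
print(minimal_tag_count(90))
```

So a 90-rack facility needs 90 tags under the minimal scheme, plus at least one M250 reader.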
This level of instrumentation provides adequate temperature monitoring across the data center and also enables computation of the Rack Cooling Index (RCI), a metric that enables you to ensure that you are complying with the ASHRAE guidelines. However, it does not provide visibility into the microenvironment that exists in each rack, instead providing only an overall measurement of the ambient temperatures around your racks.
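As a sketch of what an RCI calculation looks like, the function below implements the high-side index (RCI-HI). The recommended and allowable maximums are assumed from ASHRAE's Class A1 guidance (27 °C recommended, 32 °C allowable); actual thresholds depend on your equipment class:

```python
def rci_hi(intake_temps_c, t_max_rec=27.0, t_max_all=32.0):
    """Rack Cooling Index, high side: 100% means no intake temperature
    exceeds the recommended maximum; lower values reflect exceedances."""
    over = sum(max(0.0, t - t_max_rec) for t in intake_temps_c)
    worst_case = len(intake_temps_c) * (t_max_all - t_max_rec)
    return (1 - over / worst_case) * 100

# One reading per sensor position across the monitored racks
readings = [24.5, 26.0, 28.0, 25.5]
print(f"RCI-HI = {rci_hi(readings):.1f}%")
```

A single rack running 1 °C over the recommended maximum pulls the index below 100%, flagging a compliance gap before it becomes a failure.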
While minimal coverage may meet the ASHRAE guidelines and enable you to realize some savings, in order to maximize your savings while also minimizing the possibility of damage to your equipment, more fine-grained monitoring is necessary.
For full ASHRAE guideline compliance coupled with optimal data center power and cooling savings, we recommend:
- At least 1 RF Code M250 Fixed Reader (how many readers are required will depend on the size of your data center)
- 4 R150-5 High-Performance Temperature Tags in each rack in your data center
- 2 R155 High-Performance Humidity-Temperature Tags in each rack in your data center
- Sensors must be installed on both the front and back of each rack, with a temperature tag at the top, a humidity-temperature tag in the center, and a temperature tag at the bottom of each side
This level of instrumentation provides complete visibility into the environmental conditions in each of your racks, helping you to detect hot spots that could lead to failures as data center temperatures rise. In addition to enabling computation of the Rack Cooling Index, this level of instrumentation also enables you to calculate the Return Temperature Index (RTI). RTI, a metric that enables you to measure the efficiency of your air handling systems, greatly increases your ability to reduce cooling costs by helping to ensure that you are cooling your data center air precisely as much as is needed to adequately cool your equipment.
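The RTI calculation itself is a simple ratio, which the front/back rack tags make possible by supplying the equipment-side delta-T. The temperatures below are illustrative values, not measurements:

```python
def rti(t_return, t_supply, t_rack_exhaust, t_rack_intake):
    """Return Temperature Index: air-handler delta-T over equipment delta-T.
    100% indicates balanced airflow; below 100% suggests bypass air,
    above 100% suggests recirculation."""
    return (t_return - t_supply) / (t_rack_exhaust - t_rack_intake) * 100

value = rti(t_return=29.0, t_supply=18.0, t_rack_exhaust=33.0, t_rack_intake=21.0)
print(f"RTI = {value:.0f}%")
```

In this example the RTI lands below 100%, suggesting some cooled air is bypassing the equipment and returning to the air handlers unused.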
Data isn’t just big; it’s massive. With each passing minute, a staggering 100 hours of video are uploaded to YouTube. Google receives over 2,000,000 search queries. Email users send over 200,000,000 messages. Apple receives 47,000 app downloads. Get the picture? Clearly the omnipresent “big data” is quickly manifesting itself into “massive data.”
According to 451, “DCIM, or Data Center Infrastructure Management, is one of the most disruptive technologies this year. In 2012, the DCIM market generated around $429M in revenue and had record demand in the first quarter of 2013.” Furthermore, DCIM sales are expected to grow at 42% annually to reach $1.8B in aggregate revenue by 2016. As a result of the increasing demand for DCIM to manage Big Data, it’s clear that what was once a trend toward automation in the datacenter has become a need that will only grow in the coming years.
But with growth comes confusion. Compounding the issue, DCIM vendors abound and conflicting information is scattered across the web. With all the marketing hype about “solutions,” here are the facts to help break this down for you:
DCIM promises capacity planning, energy efficiency, airflow optimization, asset management, and thermal mapping. It promises to control and automate your data ecosystem, granting you governance and operational efficiency. But does it really? Without real-time information, you only know what DID happen, not what IS happening, in the datacenter.
The ability to monitor your environmental conditions is vital, but those readings will not be accurate in the absence of real-time monitoring. Knowing the location of a specific rack server is crucial for operational planning and efficiency, but if your assets aren’t tagged, your only alternative is to walk the datacenter in search of the rack. Without real-time updates, it becomes nearly impossible to manage critical events in the datacenter: leaks must be detected, heat must be managed, and airflow must be regulated. Faced with the threat of an unplanned outage, an IT manager needs to react to a potential disaster right away. Without timely reporting, they risk IT equipment failure costs averaging $5,600 per minute and $505,500 per event!
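Those two cost figures, taken together as quoted, imply how long the average costly event runs. This is just illustrative arithmetic on the numbers above:

```python
COST_PER_MINUTE = 5_600    # USD, Ponemon Institute estimate
COST_PER_EVENT = 505_500   # USD, average per-event cost cited above

# Average event duration implied by the two figures
minutes = COST_PER_EVENT / COST_PER_MINUTE
print(f"Implied average outage: about {minutes:.0f} minutes")

def outage_cost(minutes_down: float) -> float:
    """Estimated cost of an outage of a given length, at the per-minute rate."""
    return minutes_down * COST_PER_MINUTE

print(f"A 2-hour outage: ${outage_cost(120):,.0f}")
```

In other words, every minute shaved off detection and response time is worth thousands of dollars.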
Think about it this way: DCIM that’s not in real-time is like buying a brand new Ferrari that you can’t fill up with gas. Despite its sleek appearance, you won’t be taking full advantage of accelerating that V12 engine if your shiny new car sits motionless. Similarly, DCIM is trendy and attractive to incorporate in your datacenter, but it’s simply not complete without real-time asset tracking and environmental monitoring. Without our RF Code real-time solution, DCIM is really only DCI: Datacenter Infrastructure, lacking in Management. This is why we “turbo-charge” your datacenter for optimization. We put the “M” in “DCIM.”
Sources: 451 Research, Mashable, Datacenter Journal, Emerson Network Power