The Data Driven Data Center


IT Starts with Information, Part 2: Capacity Planning, Improved ROI, Increased Productivity

 

In part one of our blog series we discussed real-time monitoring’s position within the data center management chain. In this second blog, we analyze how to use this data to dramatically improve TCO through the continuous visibility, management and accurate provisioning of IT assets.

Our recent white paper addresses these concepts in more depth, explaining how real-time monitoring, capacity planning and predictive analysis technologies help businesses improve data center agility and efficiency to ensure higher performance at a lower cost.


Asset management is a key component of data center capacity planning; however, for many organizations the activity translates to basic record keeping and outdated collection methods.

Recording the name and location of every piece of IT equipment in the data center into a spreadsheet, by hand, is a labor-intensive, expensive and error-prone way to track valuable assets, especially as the data center is a dynamic environment.

Equipment is moved every day; devices are taken offline for maintenance, and new equipment is deployed. Many data centers that choose to track assets manually are attempting to solve a modern problem using a methodology that dates back to ancient Mesopotamia.

A data center that employs a static, manually maintained system like this requires staff to physically walk the facility when conducting inventory audits. If a device is missing from its last recorded location, or if information about the device conflicts with existing records, staff must investigate, with no indication of where to start.

This lack of visibility compromises productivity, lowers morale, increases costs, and makes capacity decisions the equivalent of working in the dark.

Keeping the Lights On

Now consider a data center with a real-time, wire-free asset management system. In this data center, the user knows the exact location of every piece of equipment in the facility. They can drill down to specifications, maintenance and warranty histories for every device. Assets are visible on a floor plan and in context, with power paths, network connections and dependencies clearly mapped.

This business fully understands the current position and status of every piece of equipment in the data center, whether or not it is connected to a rack.

Real-time monitoring data can be correlated with asset information to detect stranded capacity (for example, power is available in a given area but cooling is at its limit). “What if?” scenarios for new configurations can be modeled, and operators can predict what would happen if an asset were to fail. This is a data center that is consistently available to perform today and ready to meet the demands of tomorrow.
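
To make the idea concrete, here is a minimal sketch of how stranded capacity might be flagged by correlating per-rack power and cooling headroom. The rack names, limits and readings are hypothetical, not RF Code's data model or API.

```python
# Hypothetical sketch: flag "stranded" capacity by correlating power and
# cooling headroom for each rack. All names, limits and readings are
# illustrative; this is not any vendor's actual data model.
RACKS = {
    # rack_id: (power_used_kw, power_capacity_kw, inlet_temp_f, max_inlet_f)
    "A01": (3.2, 8.0, 78.5, 80.5),
    "A02": (5.1, 8.0, 80.2, 80.5),
    "B07": (2.4, 6.0, 72.3, 80.5),
}

def stranded_capacity(racks, temp_margin_f=1.0):
    """Return racks where spare power exists but cooling headroom is nearly gone."""
    stranded = {}
    for rack_id, (p_used, p_cap, inlet, max_inlet) in racks.items():
        spare_power = p_cap - p_used
        cooling_headroom = max_inlet - inlet
        if spare_power > 0 and cooling_headroom < temp_margin_f:
            stranded[rack_id] = {
                "spare_kw": round(spare_power, 1),
                "headroom_f": round(cooling_headroom, 1),
            }
    return stranded

print(stranded_capacity(RACKS))
# {'A02': {'spare_kw': 2.9, 'headroom_f': 0.3}}
```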

On a tactical level, real-time asset management means less time wasted in inventory reconciliation, fewer penalties from late lease returns, and smaller equipment replacement budgets. Warranty and depreciation information is readily available, audits are streamlined, and change management is simplified.

On a more strategic level, asset management systems facilitate more effective capacity planning, prevent over-provisioning and allow the redeployment of assets that could otherwise be sitting in storage areas unused, depreciating in value.

Those who question whether capacity planning is an important concern should consider the data. A recent study found that 63% of respondents indicated that they would run out of data center capacity in the next 2-5 years. Building new capacity is expensive – a data center can cost $5-10 million per MW – and co-location prices do not include electricity, bandwidth, staff or migration costs.

A company that buys new compute capacity simply because it is unable to identify whether sufficient capacity already exists is making two expensive mistakes: it is wasting money on something it does not need, and it is taking funds from other initiatives that could drive business growth.

There is an answer to this challenge, and as we explore in the final part of the series, it resides in the predictive analysis of data center monitoring data.

Download "IT Starts with Information"  

IT Starts with Information, Part 1: Data Center Monitoring Ensures Availability, Lowers OpEx

 

Downtime avoidance and disaster prevention are critical aspects of data center efficiency. Most operators use a multi-pronged approach to address these issues, incorporating redundant systems in their design, applying best practices for operational and maintenance procedures, and using innovative and integrative technologies such as data center infrastructure management (DCIM) systems to improve reliability.

Our recent white paper focuses on these concepts and explains how real-time monitoring, capacity planning and predictive analysis technologies help businesses improve data center agility and efficiency to ensure higher performance at a lower cost. In this first of a multi-part blog series, we take a look at the position real-time monitoring has in the data center management chain.


Data centers are the engines of commerce. They enable every aspect of modern civilization, from social connections to global markets, yet for something so ubiquitous, data centers are strangely invisible and poorly understood.

Most enterprise data centers spend millions of dollars on electricity each year. That’s real money in any economy, let alone in today’s highly competitive global landscape. Add in rising energy costs throughout much of the industrialized world and the increased potential for climate change-related regulation, and one would think most companies would consider improved energy efficiency their highest priority.

A recent survey by the Data Center Users’ Group found otherwise. Respondents ranked energy efficiency fourth in priority. The top concerns? Adequate monitoring and data center management capabilities, availability, and change management.

The common denominator among these concerns is ensuring availability. A seemingly minor change or miscalculation can have massive implications, and the prospect of equipment failure is the type of concern that keeps executives up at night.

The data center cannot go offline, not least because of the expense. A 2013 Ponemon Institute survey reported an average cost of $690,204 per incident and that only includes readily quantifiable impacts – business disruption, lost revenue, decreased productivity, equipment repair and the like. The cost to a business’s reputation is harder to measure, but it lasts longer and affects the bottom line far more than the expense of the actual event.

Understanding the Concepts at Hand

There are many techniques and technologies data center operators can employ to save energy in their facilities. Recent guidance from ASHRAE shows that data centers, which used to operate at 55-65°F, can today run at 80 or even 90°F, and with less stringent humidity limits. The impact of these changes can be significant: for every 1°F increase in server inlet air temperature, energy costs can be cut by 2-5%.

Given the substantial savings on offer, one might expect every business to make these seemingly straightforward adjustments. Yet many do not; over three-quarters of the respondents to the 2013 Uptime Institute survey reported that their average server supply air temperature was 65-75°F – far cooler than ASHRAE recommends.

Power distribution and backup equipment also contribute to energy waste in the data center, due to conversion losses, poorly designed power chains, and inefficient power supplies and cables. However, as with cooling, there are many strategies that can help improve power efficiency, and the most obvious of these starts on the compute side.

Because most data centers provision for peak load – loads that may occur only a few days per year – low server utilization is the status quo in the industry. Experts estimate that server utilization in most data centers is only 12-18%. These “comatose” servers still draw almost the same amount of power when idle as they do when processing at full capacity. Additionally, every watt of electricity wasted at the device level has a cascade effect, as still more energy is needed to power the physical infrastructure that supports the device.

Increasing the density of the IT load per rack through consolidation and virtualization can offer substantial savings, not only in equipment but also electricity and space. So why are data center operators leaving these potentially game-changing savings on the table?

Risk. Without real-time monitoring and management, raising inlet air temperature increases the risk of equipment failure. Without a detailed understanding of the relationship between compute demand and power dynamics in the data center, power capping increases the risk that the processing capacity won’t be available when required.

Business-Critical Monitoring Infrastructure

In an intelligent data center, thousands of sensors throughout the facility collect information on temperature, humidity, air pressure, power use, fan speeds, CPU utilization, and much more – all in real time. This information is aggregated, normalized and reported in ways that allow the operator to understand and adjust controls in response to current conditions.
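
As an illustration of that aggregation and normalization step, the sketch below rolls raw readings up into per-zone averages with consistent units. The zone names, metrics and record layout are assumptions made for the example, not the schema of any particular monitoring product.

```python
# Illustrative sketch only: normalize raw sensor readings and report a
# per-zone average an operator could act on. Field names and units are
# assumptions, not any vendor's actual schema.
from collections import defaultdict
from statistics import mean

readings = [
    # (zone, metric, value, unit)
    ("row-A", "inlet_temp", 24.1, "C"),
    ("row-A", "inlet_temp", 75.9, "F"),
    ("row-B", "inlet_temp", 26.7, "C"),
    ("row-B", "humidity",   41.0, "%RH"),
]

def to_fahrenheit(value, unit):
    """Convert Celsius readings to Fahrenheit so temperatures are comparable."""
    return value * 9 / 5 + 32 if unit == "C" else value

def summarize(readings):
    """Normalize units, then report the mean of each metric per zone."""
    buckets = defaultdict(list)
    for zone, metric, value, unit in readings:
        if metric == "inlet_temp":
            value = to_fahrenheit(value, unit)
        buckets[(zone, metric)].append(value)
    return {key: round(mean(vals), 1) for key, vals in buckets.items()}

print(summarize(readings))
# {('row-A', 'inlet_temp'): 75.6, ('row-B', 'inlet_temp'): 80.1, ('row-B', 'humidity'): 41.0}
```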

Monitoring also offers benefits beyond disaster avoidance. Cloud, co-location and hosting providers can use the data collected to document their compliance with service level agreements (SLAs). Monitoring data can be integrated into the facility’s building management system (BMS), allowing operators to further automate and optimize control of the physical environment.

Visibility at a macro and micro level improves client confidence, streamlines decision making and increases data center availability, productivity and energy efficiency.

These are all powerful outcomes; however, they are only achievable when monitoring data is united with other aspects of data center management. In our next blog we look at one specific application - capacity planning - and the role predictive analysis plays in correlating capacity-related data.


Want to learn more? Read our new whitepaper "IT Starts with Information" for an in-depth look at how real-time monitoring, capacity planning and predictive analysis technologies drive data center agility and efficiency while dramatically improving TCO through the continuous visibility, management and accurate provisioning of IT assets.

Download "IT Starts with Information"

 

Infographic: The Hidden Costs of Manual Data Collection

 

Many data centers rely on manual processes like bar code scanning and passive RFID solutions to gather and record information about the location, life cycle, and security of their valuable data center assets. These methods are costly, time consuming and error prone. Worse, because it takes so much time to collect the data, audits are typically performed only once or twice a year. The result? Strategic, business-critical decisions are often made based on inaccurate, out-of-date information.


Download this infographic as a PDF or view it as an animated Prezi.

Want to learn more about how automated asset management drives business value and rapid ROI? Download our white paper "Asset Vulnerability: The Six Greatest Risks Facing IT Asset Inventory and Management — and the Single Automated Solution" today.


RF Code Sets the Data Center Agenda Across the World

 

RF Code’s executive team continued its long-standing legacy of industry leadership with packed conferences in North America, EMEA and Asia. 


The highlight was a panel discussion I was invited to join at HCTS North America. Titled The Data Center of 2020 and chaired by Andy Lawrence, 451 Research’s Vice President of Research for Datacenter Technologies and Eco-Efficient IT, the session gave me the opportunity to introduce our vision of a data center that is the driver of corporate strategy and success. 

It was clear from audience feedback that many of the world’s largest enterprises are still struggling to meet the operational challenges facing their bottom lines. Enterprises need strategic insight that delivers real outcomes, not just the inflated marketing materials that some providers in the industry churn out.

This is why we run regular executive briefings with the world’s most influential companies to discuss their data-led strategies.

The most recent sessions took place in Hong Kong, Shanghai and Taiwan with a clear objective: through informed discussions, enterprises can shape a clear strategy to improve their own facilities’ sustainability and efficiency. Current enterprise success stories include: 

  • HP - 14,000 sensors are delivering power savings of $300,000 a month
  • IBM - 300,000 assets being tracked using RF Code Asset Lifecycle Management - $40m saved over 5 years, 100% asset visibility
  • CenturyLink - $15m+ energy related savings expected by 2018, ROI in 11 months

Our next executive breakfast will be held in London on November 18, 2014, ahead of our attendance at DCD London. Please let us know if you would like to attend this exclusive C-level event.

There are still opportunities to hear us speak in 2014 as we will be at:

Want to see how CenturyLink's RF Code deployment has resulted in millions of dollars in ongoing savings and improved service offerings for customers? Click below to watch their presentation from earlier this year. 

 

 

 

 

 

Infographic: Five Facts About Asia-Pacific Data Center Energy Use

 

Energy use in data centers throughout APAC is rising to match skyrocketing demand. Take a look at some of the ways energy efficiency -- or the lack of it -- is affecting data centers and managed service providers in the region. Click on the infographic to view it full-size in another window.

APAC DC Energy Use Infographic

(Download this infographic as a PDF)

Want to learn more?  Download this case study and see how RF Code partner QDS was engaged by one of Hong Kong’s largest power companies to improve the energy efficiency of its data centers.


Data Centers: Economic Powerhouses with an Image Problem

 

The infamous New York Times series The Cloud Factories was a tirade against the (supposed) modern equivalent of the coal power plant: the data center. The lasting image was of a hot, energy-wasting facility, a vision that still persists today.

It is a prime example of misrepresentation and press sensationalism, and it demonstrates that the public, press included, understands little about how data centers function, how they are regulated, and what drives their improvement. In the past facilities were wasteful, and some still consume significant amounts of energy, but inefficiency is becoming a thing of the past.

The data center sector is striving to improve its long-term impact through globally coordinated regulation, efficiency strategies and operational transparency. Change is occurring at all levels. Facebook has made its data center design and operational systems open to public scrutiny, renewable energy is now a real fuel alternative in certain regions, and compliance is keeping the heat on businesses rather than their IT equipment.

Government Backing

The sheer level of change occurring highlights the united front against environmental damage and financial waste.

Regulation is the first step behind any shift in operational process, which is why the U.S. Department of Energy has committed to educating federal data center operators on environmental best practices. This centralized governance intends to reduce energy usage 20% by 2020. If the initiative is effectively adopted, energy savings could reach 12 billion kWh/year – 2% of the entire country’s annual lighting consumption, or $2 billion in potential savings.

Data centers have a social responsibility to slash their power intake, as current usage levels are unsustainable. Global energy consumption is expected to reach 41.86 quadrillion BTU in 2014, according to the U.S. Energy Information Administration (EIA) – a 1% increase over 2013 and the equivalent of 8 trillion gallons of gasoline or 36 million tonnes of coal.

That is why other regions focus on hard compliance. The EU Commission’s 2012 Energy Efficiency Directive came into force in June, leaving facilities little choice but to invest in technology-led strategies to drive efficiencies across the IT environment. The aim mirrors the DoE’s: a 20% reduction by 2020 across all sections of the energy distribution chain.

For the UK, it is money driving action. The Climate Change Agreement of December 2013 has pushed co-location companies to improve their environmental footprints through financial incentives, a feat made possible by sensor networks that generate real-time visibility. As the report states, “If data centers do not meet targets, they pay the government £12 per tonne of CO2 by which the target has been missed.” The financials are quite obvious.

The Wider Benefits

Organizations have the opportunity to lead by example. Boardrooms do not want to be labeled inefficient, and increased competition is leading many businesses to think strategically. They are driving IT to undertake a program of data center optimization, especially as the positive business outcomes stretch further than green credentials and PR exposure.

ASHRAE guidelines state that a single degree Fahrenheit increase in the data center can yield yearly savings of 2%, and when you spend $80 million a year on power, as CenturyLink does, that 2% offers vast commercial and reinvestment potential. The average data center could increase temperatures by 10 degrees. That is millions waiting to be put back on the bottom line.
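
As a rough illustration of that arithmetic, the back-of-the-envelope sketch below applies the 2%-per-degree figure to an $80 million power bill. It is a naive linear estimate that ignores diminishing returns, not CenturyLink's actual savings model.

```python
# Back-of-the-envelope illustration of the 2%-per-degree figure quoted above.
# A naive linear estimate for illustration only; real savings taper off as
# temperatures rise, and this is not CenturyLink's actual savings model.
annual_power_spend = 80_000_000   # dollars per year (figure cited above)
saving_per_degree = 0.02          # ~2% saved per 1 F inlet temperature increase

per_degree = annual_power_spend * saving_per_degree
print(f"Per degree: ${per_degree:,.0f} per year")               # $1,600,000 per year
print(f"10 degrees (linear): ${per_degree * 10:,.0f} per year")  # $16,000,000 per year
```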

Other examples include Digital Fortress, whose co-location operations are now environmentally stable. With real-time RF Code data, customers have irrefutable proof that it is meeting SLA requirements and of its own facility’s sustainability. Elsewhere, a leading Hong Kong power company is passing its savings directly on to customers now that it has greater operational control.

This should be the image of data centers: serving mankind, not harming it. They do more for our everyday existence than nearly any other sector and deserve celebration. Throughout their lifespan, facilities generate thousands of jobs, help local economies and improve our lives. That is a story worth writing about.

Customer Case Study: Digital Fortress

The Data Center: A Matter of Life or Death

 

Note: An edited version of this piece was published in IQ by Intel on October 1, 2014 as "Are You Data Center Dependant?" 

Last month a 6.1 magnitude earthquake rocked Napa, reminding California, home to the world’s leading data-reliant organizations, of Mother Nature’s destructive potential.

It was a stark warning of the damage that can be caused by forces outside our control, especially to the data centers we rely on. Currently there are over 500,000 critical facilities supporting our lives. The strain on them will only increase as the Internet of Things (IoT) drives the rapid proliferation of personal, community and corporate digital technology.

Something has to change. We must become more informed about the importance of data centers in our lives. The press has no trouble writing about them, but for all the wrong reasons. The majority of column inches focus on how dirty and polluting they are, when, in reality, no other industry is doing more to clean up its act through regulation and energy efficiency.

Today, every part of our lives is dependent on data centers: our daily social media interactions, the transport infrastructure we rely on, the accessibility and management of personal and corporate finance, and the provision of our power and utilities. Everyone should understand how critical these facilities are to our daily activities.

Data Center Dependence

The financial sector is all too familiar with this challenge. IT issues and data center outages cost banks billions of dollars in lost revenue and fines every year. This impacts investor confidence and results in short-term sector instability, but it is the public that has to live with the overall fallout.

Those on vacation are stranded without money, salaries cannot be paid and mortgage approvals falter, preventing thousands from moving home. Our world grinds to a halt, as explored in Intel’s own Food. Water. Datacenter. spoof video.

The truth is less amusing for those bearing the brunt of outages. HSBC, Bank of America, RBS and Natwest have all experienced recent IT failures at the expense of their customers and bottom lines.

Disruption like theirs highlights just how fragile the global economy is and how the most mundane activities are completely driven by data. 

Moving People By The Billions

The water and power distributed to your home is routed by data centers. Traffic lights, train signals, bus routes, even the live traffic information delivered to your car, are reliant on data center facilities.

Even Google, with its ambitious automated car vision, should be concerned. Its new driverless cars generate over 1GB of data a second through advanced image collection and processing technologies. Multiply this by the number of cars Google expects to manufacture and the mountain of data becomes unfathomable. If the data center does go down, your car is nothing more than somewhere to stay dry while you wait for a bus!

The same pressures apply to airlines. Planes only earn money when in the air, and if a carrier is already posting massive annual losses of $2.8 billion, mass delays only add to the financial strain. Qantas experienced this when it grounded 400 flights due to a check-in system failure. It ruined thousands of vacations because its data center did not have a contingency plan for the leap-second bug. 

Closer to home is your daily shopping routine. If Amazon has issues during the festive period, it isn’t just your family’s Christmas that is ruined, but the company’s bottom line as well. Last year it took the company just 49 minutes of downtime to lose $4 million in sales, while a similar half-hour data center incident saw $65,000 hemorrhaged every minute. This is not a financially sustainable business strategy, nor does it win customer loyalty.

The New Age of Terror

Emerging technologies highlight how exposed the modern world is. In the age of terror, data centers have replaced power stations and airports as strategic targets. Deliberate outages already happen regularly. The whole of Syria was taken offline in 2013 when the government blocked external communications services. Iraq, North Korea and Sudan saw the same treatment at the hands of insurgents and their oppressive regimes.

Data is no longer a commodity but a priceless asset upholding national security. Even when intentions are not sinister, events out of our control are unavoidable. Level 3 Communications famously cursed squirrels in 2011 for chewing cables, a pest that was responsible for almost 20% of outages.  The undersea cables that carry data traffic between continents are even more susceptible; a regular target for hungry sharks and clumsy ships.

Most major governments and businesses are waking up to data security. The most famous example is Visa’s Operations Center East with an address ‘somewhere on the Eastern seaboard’. A genuine fortress with 130 former military personnel guarding its grounds, it has a moat and hydraulic bollards that can be raised to stop a car traveling at 50MPH.

Putting Our Lives In Digital Hands

The data center is now omnipresent and its significance in our personal and professional lives cannot be overstated. From a misplaced server holding financial data to the loss of Netflix on a Sunday afternoon, data centers are now symbiotic entities we cannot separate ourselves from.

They will only increase in importance as the last non-digital services are disrupted by technology. Gartner predicts more than $143 billion will be spent on data centers globally this year. Given their monumental importance, this seems like money well spent.

Free White Paper - Big Data, Big Security Risks

Data + Metrics + Training = Data Center Efficiency

 

This week RF Code is pleased to feature a guest blog by Dr. Magnus Herrlin, President of ANCIS Inc. Prior to establishing ANCIS, Dr. Herrlin was a Principal Scientist at Telcordia Technologies (Bellcore), where he optimized the network environment to minimize service downtime and operating costs. His expertise has generated numerous papers, reports, and standards covering thermal management, energy management, mechanical system design and operation, and network reliability.

You might ask yourself what data, metrics, and training may have in common in the data center domain. Data is everywhere and increasingly so. The goal of collecting data is to do something intelligent with it, at least eventually. Metrics are fantastic tools to compress a large amount of “raw” data into understandable, actionable numbers. However, data and metrics alone will not necessarily make us design and operate data centers in a more energy-efficient way. The missing link is training of those involved in the design and day-to-day operation of the data center.

Data

We have the capabilities to collect a nearly endless amount of operational infrastructure data. Specifically, many Data Center Infrastructure Management (DCIM) offerings have that capability when linked to a sensor network. Computational Fluid Dynamics (CFD) modeling has a similar data-handling challenge. On the infrastructure side of the business, the data we are talking about are those related to energy and those related to environmental conditions in the equipment space. The data can be collected with wired or wireless technologies and linked to DCIM software. Of utmost importance is that the collection and software technologies are flexible. Data centers are dynamic environments, which require frequent reconfigurations of the data collection system. Many of the commercial data collection systems are phenomenal data-generating machines.

But data alone is not sufficient to achieve operational efficiency. The growth of the DCIM market has not met the industry projections of a few years back. One reason may be that DCIM vendors have been less than successful in communicating the benefits of their systems. Another may be that the industry is young and poorly organized and standardized. But the lack of tools to convert raw data into actionable information – information the user can actually use to improve the operation of the data center – may also be a major contributor to this slow growth.

Metrics

Metrics are fantastic tools for compressing a large amount of data into useful numbers. A metric is typically calculated with a formula, generating an output that is simple to understand. Raw data often requires analysis and interpretation to make it useful. “Rich” data on the other hand can be used for predicting, planning, and decision making. And well selected metrics help produce rich data.

Since a metric generally is a single number, it can also easily be tracked over time. After all, how do you effectively track 200 or 2,000 raw data points over time? Tracking performance is imperative because it shows progress (or lack thereof). And don’t underestimate the business value of this data: the guy in the corner office simply loves this type of information.

Maybe the most well-known metric in data centers is Power Usage Effectiveness (PUE). It is a measure of the energy (not power) premium required to condition the equipment space. A PUE well above 1 means a large infrastructure overhead, whereas 1 would mean no overhead whatsoever. Clearly, calculating even this single-number metric requires raw data as well as some data analysis (using a formula).
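
As a simple illustration, the calculation behind the metric takes only a few lines; the meter readings below are invented for the example.

```python
# Minimal sketch of the PUE calculation: total facility energy divided by the
# energy delivered to the IT equipment. The readings are made up for illustration.
def pue(total_facility_kwh, it_equipment_kwh):
    return total_facility_kwh / it_equipment_kwh

# A PUE of 1.8 means an 80% infrastructure "premium" on top of the IT load.
print(pue(total_facility_kwh=9_000_000, it_equipment_kwh=5_000_000))  # 1.8
```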

Energy efficiency needs to be balanced with equipment reliability, which generally has higher priority for data center owners and operators than energy efficiency. It’s a balancing act. An important part of equipment reliability is the air intake temperatures. The trick is to increase intake temperatures and thereby decrease energy costs without risking equipment reliability. In other words, energy and thermal management. Several organizations have developed guidelines for intake temperatures, for example, ASHRAE and NEBS.

Leading network services provider CenturyLink set out to find a way to cut its spending on energy and cooling. But without appropriate monitoring and analysis, increased temperatures could lead to costly equipment failures. Based on a pilot project with the environmental monitoring and asset management company RF Code, CenturyLink is projected to save nearly $3 million annually at full implementation by balancing the temperature increase with equipment reliability.

There are plenty of intake temperatures in a data center – to be exact, one everywhere there is a cooling fan on a piece of equipment. Even a fairly small data center has the capacity to produce data that becomes next to worthless without data management. One metric that was specifically designed to help with such data overload is the Rack Cooling Index (RCI). It is a metric for showing compliance with the ASHRAE and NEBS temperature guidelines. An RCI of 100% means perfect compliance. RF Code’s software, used by CenturyLink, automatically calculates this metric to help ensure thermal compliance.
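
To show how such a metric condenses many readings into one number, here is a simplified sketch of the over-temperature (high-side) RCI calculation. The thresholds are illustrative values in the spirit of the ASHRAE recommended and allowable maxima; this is a sketch of the general idea, not RF Code's implementation.

```python
# Simplified sketch of a high-side Rack Cooling Index (RCI_HI): 100% means no
# intake temperature exceeds the recommended maximum. Thresholds are illustrative
# (roughly ASHRAE's recommended/allowable maxima); not any vendor's implementation.
def rci_hi(intake_temps_f, max_recommended=80.6, max_allowable=89.6):
    n = len(intake_temps_f)
    # Total over-temperature above the recommended maximum across all intakes.
    over = sum(t - max_recommended for t in intake_temps_f if t > max_recommended)
    # Worst case: every intake sitting at the allowable maximum.
    worst_case = (max_allowable - max_recommended) * n
    return (1 - over / worst_case) * 100

print(rci_hi([75.0, 79.5, 82.0, 84.5]))  # ~85.3: two of four intakes run too hot
```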

Training

There are a number of training opportunities for the data center industry. I will limit myself to a training program called the Data Center Energy Practitioner (DCEP) program. It was developed by the Department of Energy’s (DOE) Advanced Manufacturing Office in partnership with industry stakeholders. This certificate training program for data center energy experts has now been re-initiated with help from the Federal Energy Management Program (FEMP), Lawrence Berkeley National Laboratory (LBNL), and three professional training organizations. A number of training dates are upcoming across the country, beginning on October 27, 2014 in New York. 

The main objective of the DCEP Program is to raise the standards of those involved in energy assessments of data centers. Training events, lasting one, two, or three days, equip attendees with the knowledge and skills required to perform accurate energy assessments of HVAC, electrical, and IT systems in data centers, including the use of the DOE DcPro suite of energy-assessment software tools. The energy efficiency achieved on day one will unquestionably decline over time if the staff does not understand how the energy and environmental systems are supposed to work. The more sophisticated the systems, the greater the need for trained staff.

For more information about the DCEP training, please visit DOE’s Center of Expertise for Energy Efficiency in Data Centers. This website also maintains a list of over 300 recognized DCEPs, who are available to perform standardized energy assessments.

With flexible data collection, efficient data management, powerful metrics, and trained staff, we can actually do something useful with the data. Nice!

Get a closer look at how CenturyLink used intelligent sensor networks to drive data center efficiency and savings:  Watch this presentation by CenturyLink's Joel Stone and John Alaimo today!


Disaster Avoidance: Hope is Not a Sustainable Co-location Strategy

 

This was a lesson Digital Fortress quickly learned when a $60,000 air conditioning unit leaked silently, spilling over 25 pounds of refrigerant across the data center. Its own systems showed 100% efficiency; the reality was the opposite. Failure was imminent. 

It is an all too common story for co-location companies. They default to ‘hope’ as a strategy, believing a facility will continue operating stably, rather than investing in solutions that prove it to customers with cold, hard data.

As a co-location provider, your business is built on your reputation. Facility efficiency, thermal governance and ensuring availability keep your bottom line healthy. If a server fails, your business can fail. As Scott Gamble, Digital Fortress’ IT Manager, says, “ten minutes can be the difference between things being OK and a serious problem.”

Digital Fortress depends on this sensitive balance. Its customers expect it to provide adequate power and cooling as well as 100% visibility into co-located assets.

Shoring up the Fortress’ Defenses

Digital Fortress adapted its management model to ensure disaster situations like the above would never happen again. Customers had already begun requesting real-time data about facility conditions. They wanted to know that Digital Fortress’ environment could cope with current and future capacity.

With RF Code sensors, Digital Fortress was able to provide clear intelligence about current conditions. But the benefits go much further than just meeting customer requests. Digital Fortress can:

  • Assure customers that assets are safe, stable and located within an intelligently run facility
  • Pursue a strategy of disaster prevention and avoidance, rather than recovery
  • Drive efficiency improvements and cost savings across the entire business
  • Effectively manage capacity and associated environmental conditions
  • Eradicate temperature fluctuations and accurately detect leaks and other complications

Outcomes like these are only possible with a solution like ours, where advanced software capabilities combine with sensor instrumentation to provide a complete and live overview of the full environment.

Integration with other systems is critical for guaranteeing complete operational control: facilities management, traditional DCIM, power, water and cooling distribution (something Digital Fortress has already implemented), BMS, and enterprise resource planning platforms are all necessary.

Digital and Financial Results

It took just six days for Digital Fortress to realize the benefits of RF Code. Gamble was able to deploy the solution single-handedly, producing positive results such as:

  • Leak detection, insight that was acted upon instantly for disaster prevention
  • Eradication of an 11 degree temperature fluctuation
  • Holistic view of the overall environment for increased efficiency
  • Simplified cooling and energy configuration and management
  • A significant reduction in energy usage and maintenance costs

According to Gamble, most beneficial was C-level buy-in and financial sustainability: “[Senior management] didn’t want a giant bill all at once. They wanted to know that they could invest a little at a time over the course of the year. It helps with spending. It helps with the budget.”

This financial clarity has resulted in Digital Fortress choosing to invest in our personnel tracking solutions in the near future, which will give it both operational and physical security. “Security is a big concern for us and while all our vendors have been vetted, it is always good to have that kind of visibility into where people are at any given time,” concluded Gamble.

To learn more about Digital Fortress’ success with RF Code, read our case study. Its story was also recently covered in the RFID Journal.

Customer Case Study: Digital Fortress

Big Savings, Small Data: 451 Research Endorses RF Code’s Software-Driven Future

 

Enterprises are suffering from an intelligence crisis. They have been blinded by the mythical promises of Big Data. Regularly heralded by advocates as instrumental to corporate success, the relentless pursuit of new data sources and huge data pools has led many organizations to lose sight of more achievable strategic objectives.

Businesses already possess a massive amount of untapped wealth, but it is sitting dormant in the data center, its opportunities unrecognized. Real business value can be generated from these resources, not by thinking as big as possible.

Invest in your data collection methods, implement effective governance and management processes, and focus on advanced, automated analytical capabilities. That is thinking ‘big’ in the smart way.

Size Doesn’t Matter

451 Research highlights this in its latest report addressing RF Code’s future. It echoes our CEO’s view that the data center has the potential to become the hub of all strategic decision making, but only if you bring your software and hardware capabilities together.

Analyst Rhonda Ascierto cites RF Code as an example, “RF Code’s data management and analytics layer acts like an enterprise service bus (ESB) for the data center, enabling bi-directional communications with other software, including DCIM, business process management and IT service and systems management."

Integration is critical. Isolated systems cannot provide value unless they have other sources to correlate their intelligence with. Enterprises are waking up to this concept, as is the DCIM industry.

Change begins with the data created by the facilities themselves. The DCIM market – which according to Gartner will reach $1 billion this year – is ripe for major innovation, and RF Code is leading the industry.

We are already responsible for helping the largest companies in the world manage the data produced by their facilities. We are now building on that legacy to leapfrog traditional DCIM.

Our unique instrumentation layer, combined with our new software capabilities, provides enterprises with a complete platform to create the agile facilities they want and need. This is an advantage RF Code customer CenturyLink has already discovered.

CenturyLink: An $80 Million-a-year Example 

CenturyLink is already helping its customers execute their own data-driven business strategies, but managing a footprint of 55 facilities comes at a heavy price: an annual energy bill of $80 million.

The company’s IT team knew its internal data held the answer. It recognized that a controlled raising of server intake temperatures could produce multi-million dollar savings but the team lacked the necessary instrumentation and software capabilities to achieve its objectives.

RF Code’s environmental sensors were the answer. They allowed CenturyLink to carefully monitor its temperature adjustments in real time. Once the management team had proof its adjustments were sustainable, it integrated RF Code’s data with its own building management system (BMS) and automated the new airflow processes to continue slashing its expenditure.

This methodology is now being rolled out across CenturyLink’s entire data center footprint. Savings are projected to reach over $15 million by 2017, $2.9 million of which came in the first year with RF Code.

Those are big savings from small data. To learn more about RF Code’s software-driven future, read the latest 451 Research report.

Download this 451 Research Report
