This was a lesson Digital Fortress learned the hard way when a $60,000 air conditioning unit silently leaked more than 25 pounds of refrigerant across the data center. Its own systems showed 100% efficiency; the reality was the opposite. Failure was imminent.
It is an all too common story for co-location companies. They default to ‘hope’ as a strategy, believing a facility will continue operating stably, rather than investing in solutions that prove that stability to customers with cold, hard data.
As a co-location provider, your business is built on your reputation. Facility efficiency, thermal governance and guaranteed availability keep your bottom line healthy. If a server fails, your business can fail. As Scott Gamble, Digital Fortress’ IT Manager, says, “ten minutes can be the difference between things being OK and a serious problem.”
Digital Fortress depends on this sensitive balance. Its customers expect it to provide adequate power and cooling as well as 100% visibility about co-located assets.
Shoring up the Fortress’ Defenses
Digital Fortress adapted its management model to ensure disaster situations like the above would never happen again. Customers had already begun requesting real-time data about facility conditions. They wanted to know that Digital Fortress’ environment could cope with current and future capacity.
With RF Code sensors, Digital Fortress was able to provide clear intelligence about current conditions. But the benefits go much further than just meeting customer requests. Digital Fortress can:
- Assure customers that assets are safe, stable and located within an intelligently run facility
- Pursue a strategy of disaster prevention and avoidance, rather than recovery
- Drive efficiency improvements and cost savings across the entire business
- Effectively manage capacity and associated environmental conditions
- Eradicate temperature fluctuations and accurately detect leaks and other complications
Outcomes like these are only possible with a solution like ours, where advanced software capabilities combine with sensor instrumentation to provide a complete and live overview of the full environment.
Integration with other systems is critical for guaranteeing complete operational control: facilities management, traditional DCIM, power, water and cooling distribution, BMS (something Digital Fortress has already implemented) and enterprise resource planning platforms are all necessary.
Digital and Financial Results
It took just six days for Digital Fortress to realize the benefits of RF Code. Gamble was able to deploy the solution single-handedly, delivering results such as:
- Leak detection, insight that was acted upon instantly for disaster prevention
- Eradication of an 11-degree temperature fluctuation
- Holistic view of the overall environment for increased efficiency
- Simplified cooling and energy configuration and management
- A significant reduction in energy usage and maintenance costs
According to Gamble, most beneficial was C-level buy-in and financial sustainability: “[Senior management] didn’t want a giant bill all at once. They wanted to know that they could invest a little at a time over the course of the year. It helps with spending. It helps with the budget.”
This financial clarity has resulted in Digital Fortress choosing to invest in our personnel tracking solutions in the near future, which will give it both operational and physical security. “Security is a big concern for us and while all our vendors have been vetted, it is always good to have that kind of visibility into where people are at any given time,” concluded Gamble.
To learn more about Digital Fortress’ success with RF Code, read our case study. Its story was recently covered in the RFID Journal.
Enterprises are suffering from an intelligence crisis. They have been blinded by the mythical promises of Big Data. Regularly heralded by advocates as instrumental to corporate success, the relentless pursuit of new data sources and huge data pools has led many organizations to lose sight of more achievable strategic objectives.
Businesses already possess a massive amount of untapped wealth, but it is sitting dormant in the data center, its opportunities unrecognized. Real business value can be generated from these resources without thinking as big as possible.
Invest in your data collection methods, implement effective governance and management processes and focus on advanced, automated analytical capabilities. That is thinking ‘big’ in the smart way.
Size Doesn’t Matter
451 Research highlights this in its latest report addressing RF Code’s future. It echoes our CEO’s view that the data center has the potential to become the hub of all strategic decision making, but only if you bring your software and hardware capabilities together.
Analyst Rhonda Ascierto cites RF Code as an example, “RF Code’s data management and analytics layer acts like an enterprise service bus (ESB) for the data center, enabling bi-directional communications with other software, including DCIM, business process management and IT service and systems management."
Integration is critical. Isolated systems cannot provide value unless they have other sources to correlate their intelligence with. Enterprises are waking up to this concept, as is the DCIM industry.
Change begins with the data created by the facilities themselves. The DCIM market – which according to Gartner will reach $1 billion this year – is ripe for major innovation, and RF Code is leading the industry.
We are already responsible for helping the largest companies in the world manage the data produced by their facilities. We are now building on that legacy to leapfrog traditional DCIM.
Our unique instrumentation layer combined with our new software capabilities provides enterprises a complete platform to create the agile facilities they want and need. This is an advantage RF Code customer CenturyLink has already discovered.
CenturyLink: An $80 Million-a-year Example
CenturyLink is already helping its customers execute their own data-driven business strategies, but managing a 55-facility footprint comes at a heavy price: an annual energy bill of $80 million.
The company’s IT team knew its internal data held the answer. It recognized that a controlled raising of server intake temperatures could produce multi-million dollar savings but the team lacked the necessary instrumentation and software capabilities to achieve its objectives.
RF Code’s environmental sensors were the answer. They allowed CenturyLink to carefully monitor its temperature adjustments in real-time. Once the management team had proof its adjustments were sustainable, it integrated RF Code’s data with its own building management system (BMS) and automated the new airflow processes to continue slashing its expenditure.
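The stepwise approach described above – raise the setpoint only when live sensor data shows headroom – can be sketched in a few lines of Python. This is an illustration, not CenturyLink’s actual control logic; the function name, safety margin and sample readings are ours, while the 80.6°F ceiling is the top of ASHRAE’s recommended inlet temperature range:

```python
def safe_to_raise(inlet_temps_f, target_f, margin_f=2.0, ceiling_f=80.6):
    """Return True if every sensor reads comfortably below the proposed
    target, and the target itself leaves a safety margin under the
    ASHRAE-recommended 80.6 F (27 C) inlet ceiling."""
    return (target_f + margin_f <= ceiling_f
            and max(inlet_temps_f) < target_f - margin_f)

# Hypothetical real-time sensor readings (degrees F) from one cold aisle.
readings = [68.2, 70.1, 69.5, 71.3]
print(safe_to_raise(readings, 75.0))  # True: room to step the setpoint up
```

Only once readings stay stable at the new setpoint would the next increment be attempted, mirroring the "slowly and safely" methodology in the case study.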
This methodology is now being rolled out across CenturyLink’s entire data center footprint. Savings are projected to reach over $15 million by 2017, $2.9 million of which are from the first year with RF Code.
Those are big savings from small data. To learn more about RF Code’s software-driven future, read the latest 451 Research report.
According to Synergy Research Group, the retail co-lo market grew 8% in 2013. This is unsurprising: as enterprises continue their relentless move towards cloud IT, co-location space is being snapped up.
The second largest provider of co-location and hosted IT space in the report is CenturyLink; the same CenturyLink that is now an RF Code customer.
With RF Code the company is saving millions of dollars in power and cooling.
CenturyLink’s objective was simple: drive energy efficiency and reduce its massive power bill. Prior to the RF Code deployment it was spending over $80 million a year on electricity.
Based on its current data center footprint (55 data centers in North America, Europe and Asia) and energy cost predictions, it identified annual savings of $2.9 million. Action was crucial for the company’s bottom line.
First, The Results
Joel Stone, CenturyLink’s Vice President of Global Data Center Operations, and John Alaimo, CenturyLink Data Center Systems Engineer, set a goal of raising server inlet temperatures to a more financially sustainable 75°F.
RF Code sensors were deployed at a 65,000 square foot pilot site to monitor thermal conditions as the team slowly and safely increased temperatures. Savings in the first year were $120,000: a return on investment within just 11 months.
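The reported payback period can be sanity-checked with simple arithmetic. A minimal sketch, assuming a pilot deployment cost of about $110,000 – our assumption for illustration; the article does not state the actual figure:

```python
def payback_months(deployment_cost, annual_savings):
    """Simple payback period in months (ignores discounting and ramp-up)."""
    return deployment_cost / (annual_savings / 12)

# A hypothetical deployment cost of $110,000 against the reported
# $120,000 in first-year savings gives roughly an 11-month payback.
print(round(payback_months(110_000, 120_000)))  # 11
```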
Once they were sure their changes were sustainable, the company began implementing RF Code sensors across the rest of its global data center footprint, including 11 sites during 2014.
CenturyLink is forecasting savings of $1,205,000 for this year alone, with savings accruing each year thereafter. The full benefits include:
- Projected annual savings of over $15,000,000 by 2017 if the current deployment schedule is kept
- Thousands of additional hours of free cooling through use of economizer windows
- Automated environmental management and monitoring across facilities, driven by RF Code data
- Full integration with BMS, when necessary, for greater control of thermal conditions
But it isn’t just CenturyLink that benefits, as Alaimo notes: “The benefits are not only cost related. We can now demonstrate we are meeting our SLAs because we have so much more data within our facilities. We are able to mitigate risks and have greater visibility of the environment.”
General Co-Lo Trends
As the market matures, co-location customers are becoming more demanding. Providers are now more accountable for their facilities than ever before. There is no hiding behind the cloud.
Our Digital Fortress case study demonstrated this. Like CenturyLink, the company implemented real-time monitoring capabilities not only for its own operational sake, but for its customers. It recognized that the benefits of doing so far outweighed the risk of doing nothing.
Deploying a system that delivers 100% visibility of the data center environment generates hard statistical evidence that service level agreements are being upheld. Customers no longer have to trust a provider’s word; they have incontrovertible proof, as does senior management and the finance department.
Customers can see in real-time that their business-critical IT is running efficiently and safely. Any financial savings generated can be returned to the customer through billing reductions or other commercial benefits.
Read about the CenturyLink deployment in more depth in our new case study here.
Our previous discussion around wearable technology and its potential impact on the data center raised some important questions around always-on sensor networks, the Internet of Things (IoT) and intelligent infrastructure.
The Internet of Things is not a passing fad. USA Today recently highlighted its importance and the growing number of applications for sensor networks. They range from the mundane and everyday to city-scale efficiency improvements.
Sensors are everywhere. From the clever Tile sensor that keeps everyday items in check, to the health-monitoring band on your wrist; from GE Healthcare tracking life-critical equipment and patients, to China using sensor technology to more intelligently manage urbanization.
Sensors are helping prevent suicide in prisons, they are making our rivers cleaner and they are making our buildings more environmentally friendly, including the world’s data centers.
From a corporate perspective, they are driving greater operational efficiency, allowing enterprises to save massive amounts of money and helping improve the quality of service delivered to customers, as our recent case study for one of Hong Kong’s leading power companies demonstrates.
Coming Together Under One Intelligent City
The numbers around sensors are staggering. VisionGain puts the wearable sector at $5 billion, the broader IoT landscape is nearer $1.5 trillion (MarketsandMarkets) and sensor-supported smart cities are attracting a similar level of investment (Frost & Sullivan).
This is not surprising. The world’s population passed 7 billion in 2012. A billion more people are expected to be alive by 2025, 60% of whom are expected to live in urban environments; by 2050, 70%.
As urbanization continues and communities become increasingly reliant on the benefits sensor networks bring, investment will only surge one way - upwards. This isn’t a future concern but one that needs addressing now. The world today needs greater efficiency in how its transport networks, logistics infrastructure, public services and healthcare are run.
The good news is that Big Data and sensors are changing lives for the better already. Santander’s deployment of 20,000 sensors in Spain to monitor and reduce air pollution levels is helping improve citizens’ quality of life.
Helsinki’s government has used Big Data to overhaul the efficiency of its public bus network, saving 5% in fuel costs through more intelligent routing and bus maintenance – a percentage that yields vast commercial benefits considering the number of vehicles in the fleet. This money can then be reinvested to further improve services.
France is experimenting in reducing energy usage in homes through intelligent monitoring. Ecuador is investing $1 billion in a high-tech city that will adapt to residents’ needs.
There are many other examples. The cities we live in, our interaction with the world and the data infrastructure that supports everything is shifting at a seismic rate. Sensors are here to stay and will only become more critical as urbanization continues.
Big Data applications and any improvements do however rely on one key component: the world’s data centers running flawlessly. If uptime drops, customer opinion will suffer, and as we become more reliant on data-driven technology, people’s lives could be put under threat.
The data center of the future must be secure, commercially aligned, thermally efficient, optimized and compliant with international data, energy and carbon regulations, and it must support the businesses, services and people that rely on it entirely.
To learn how one core public service provider did just that, read our latest case study here.
It ranks fourth on IMD’s annual World Competitiveness report, has a population of over 7 million, is the world’s densest high-rise city and defines APAC commerce globally.
The continued global recovery depends on Hong Kong, which drives the recession-busting growth figures evident in the region. Its sustainability as a world financial player relies entirely on its data centers. They are its lifeblood, but also one of its most significant challenges.
The power demands of a world center of finance are massively expensive. Efficient power generation and distribution against the backdrop of rising energy costs has put Hong Kong and mainland China’s energy corporations in the spotlight.
Globally people are consuming more energy than ever before. The US Energy Information Administration expects global use to rise by an astronomical 56% by the year 2040. That’s the equivalent of 4,000 power plants.
Identifying efficiencies can transform the bottom line of an energy company instantly and significantly. Inefficient systems and processes already add millions of dollars of needless expense to budgets. This cannot continue.
As power becomes scarcer, more expensive and transforms into a near-priceless commodity, citizens will expect governments to compete on a macro level and accommodate their energy requirements. Energy sustainability will quickly be an issue affecting everyone.
The Stirred Dragon
These challenges are already evident in Hong Kong and China. The dramatic growth of power use within Hong Kong is a direct result of emergent data technologies. Businesses and the consumer landscape have pushed the city, known for its technological leadership, towards an always-on culture severely reliant on power and data infrastructure. The 24/7 financial demands of the city place further pressure on its energy companies.
Data center growth in Asia as a whole is exploding as more than half the world’s population (estimated at 4.4 billion) catches up with the West in technology, digital culture and general infrastructure.
Equinix’s capital expenditure in Hong Kong reached $150 million this month with the expansion of its latest Tsuen Wan facility. SoftLayer, IBM’s infrastructure arm, pledged $1.2 billion for facilities in key financial centers, including Hong Kong. China Unicom has invested $3 billion into an integrated telecommunications facility in the region to support its soaring user base.
Analyst figures mirror this. Frost & Sullivan values the Hong Kong data center market at $802 million by 2019. These are numbers that demand attention.
Power Driving Power
With statistics like this, it is crucial that energy companies have resilient, optimized infrastructures that support their complex operational requirements.
Scaling power in real-time and retaining enough capacity to ensure services is not just mission-critical for Hong Kong’s energy companies, but critical on a global scale.
Behind the intelligent generation of power is IT and the data centers of power companies. As energy distributors look to enhance and drive greater efficiency across their operations, they require agile IT and data center infrastructure to support their objectives.
Efficient urban energy management is mirrored by efficient energy control within the data center.
Implementing visibility measures and identifying immediate thermal and power-related concerns – as well as predicting future trends - allows management to maximize budgets and guarantee operational redundancy. Disaster avoidance is essential in the context of power distribution. Power cannot fail.
Costs Against Distribution
The age of cheap power in Asia is ending. Shanghai Grid announced a 10% increase across its network last year. PetroChina is raising its oil prices drastically. Power companies, if they are to remain competitive for consumers, have to deliver commercial improvements elsewhere in their operations. The data center offers the perfect opportunity.
As Zhu Hua, Tencent’s Senior Data Center Architect and member of the China Data Center Committee, highlighted recently, the potential cost savings are vast: “For instance, for a data center with 100,000 servers, if PUE drops by 0.1, it will help the data center save 12m CNY (almost $2 million) a year.”
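Zhu Hua’s figure can be reproduced with simple arithmetic: facility power is IT power multiplied by PUE, so a PUE reduction of 0.1 cuts continuous facility load by one tenth of the IT load. A minimal sketch, assuming an average server draw of 0.4 kW and an electricity price of 0.34 CNY/kWh – both our illustrative assumptions, not figures from the quote:

```python
HOURS_PER_YEAR = 8760

def annual_savings_from_pue_drop(num_servers, avg_server_kw, pue_drop, price_per_kwh):
    """Annual cost saving from reducing PUE, holding IT load constant.

    Facility power = IT power * PUE, so a PUE drop of d saves
    (IT power * d) in continuous facility load.
    """
    it_load_kw = num_servers * avg_server_kw
    saved_kw = it_load_kw * pue_drop
    return saved_kw * HOURS_PER_YEAR * price_per_kwh

# 100,000 servers at an assumed 0.4 kW each, PUE reduced by 0.1,
# electricity at an assumed 0.34 CNY/kWh:
savings_cny = annual_savings_from_pue_drop(100_000, 0.4, 0.1, 0.34)
print(f"{savings_cny:,.0f} CNY")  # roughly 11.9m CNY, close to the quoted 12m
```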
One of Hong Kong’s leading power companies recognized similar potential efficiency improvements in its own data center and with the help of RF Code optimized its operations to cut its current power requirements.
This involved bringing the facility in line with globally recognized ASHRAE guidelines, deploying an environmental monitoring solution in a single day and rectifying thermal complications that, if left unaddressed, would have caused the facility to fail.
Read about how the company achieved this in our new case study and follow RF Code on Twitter and LinkedIn for updates on the Asian data center market and rising global energy crisis.
The first generation of Android Wear technology was released last week, bringing with it a new era of interaction, connectedness and, in the long run, potential strain on the world’s data centers.
As smartphones have become ubiquitous and lost their excitement factor, phone manufacturers are searching for new ways to expand their sales base. As consumers clamor for a simpler, more natural user experience, wearable technology presents one of the largest opportunities.
It is clear that strapping technology to our bodies is more than a fad: it is a major technology market with the power to change how we interact with the world.
Wearing With Pride
Last year’s Juniper Research report estimated that by 2018, $19B will be spent on devices that interact with our phones. This estimation is certainly mirrored by Google’s desire to integrate Android into everything: Android Wear, Google Glass, Nest.
Early devices like Pebble have shown there is a desire for wearable technology, but the real growth at this early stage – and threat to the data center – looks to be through specialist wearable technology which answers specific questions on people’s wellbeing.
Health sensors have exploded in popularity over the last 18 months. There is no indication that demand will slow. Mirroring the above Juniper figures, a similar report by MarketsandMarkets puts the consumer healthcare sensor market at nearly $50B by 2020.
Products like the Fitbit activity tracker and Jawbone Up definitely suggest early market estimations are on-trend. Over 3 million Fitbits were sold in the US last year. Jawbone acquired BodyMedia last year for $100 million. These are big numbers for a fledgling market.
Connecting Your Body
As the technology itself continues to improve and the stigma of wearing technology subsides, growth will continue, and with it, the continuous flow of data being sent to data centers.
Product advancements have already increased the volume of data being generated by wearable devices. Early-stage products used to require manual syncing, but now everything is constant and automated. Data from millions of sensors will keep flooding into data centers in increasing amounts.
The development of Bluetooth Low Energy standards will only add to the vast amount of data piling into the business-critical facilities of the world’s largest data-driven companies.
The list of Bluetooth Smart Ready products (which includes scales, proximity sensors, heart rate monitors, breathalyzers, blood pressure monitors, pedometers and other devices) is growing daily as consumers become more inclined to monitor their health analytically. Data is no longer something confined to IT, but a core part of how many of us choose to live our lives.
Behind The Scenes
Advancements in the battery technology that supports mobility will further open up a world where people no longer reach for their pockets, but sensors reach out to owners.
This expectation of an always-on, ever-connected life brings with it a specific set of data-driven financial and IT challenges. Wearable technology will become a critical issue for those managing the infrastructure powering this new world.
The Data Center Journal noted six months ago that the wearable reality has already arrived. Worryingly for companies moving into the market, its growth has the capability to damage data center sustainability, reliability and service levels.
Organizations are only just beginning to grasp the data challenges created by mobility and smart devices, so the prospect of two more disruptive markets on the horizon – wearable technology and the Internet of Things – poses significant risks to those behind the commerce-dependent data infrastructure that supports everything.
In a future blog on the topic, we’ll address what specific IT and financial challenges accompany wearable technology and the methods for overcoming them with an automated, intelligent data center.
Has the data center from which Target’s consumer data was stolen seemed a lot like a scene out of RF Code Theater Production’s “Supernatural Info-tivity” lately? With 110 million victims in the November data breach (70 million more than the originally reported 40 million), Target ushered the age of Big Data Theft into public consciousness and suffered major losses as a result. Follow-up reports that Michaels and Neiman Marcus suffered similar attacks only underlined that your data is everywhere: each time one of us swipes a payment card, we could be sending our most vital details straight into a scary situation.
This month, details have been released that parallel the breach even more closely with the typical horror movie. Just as we’ve seen in thousands of B-level scary movies, when teenagers are told not to drive down that road or a girl is warned not to open that box, it was revealed that Target was warned. Two months before the huge data breach, IT security staff warned Target of potential vulnerabilities within its card payment system; we wouldn’t be watching a horror movie if Target had heeded their advice.
True, there may not have been any gruesome deaths at the hands of a psychopath or demonic possession. Instead, the frights occurred when Target purchased credit protection plans for all 110 million victims, was hit with multiple lawsuits from financial institutions, saw its stock price fall to a 20-month low, and ultimately reported lower-than-expected 2013 Q4 results. With analysts predicting that the entire ordeal may cost the big box retailer more than $1 billion in fees, many on Wall Street and in Target’s boardrooms would describe the incident as…well… a bloodbath.
The Target situation goes far beyond strictly data center security, as malware was even installed on point-of-sale registers. These registers are all connected to remote management software and “protected” with what are now being reported as weak passwords. The industry’s defense has been to rally for more information about cyber threats, through the Retail Industry Leaders Association’s Cybersecurity and Data Privacy Initiative, and to encourage chip-and-PIN cards to replace the existing technology. Already the standard in Europe, these cards are supposedly more secure because they are difficult to counterfeit and encrypt information on-site. However, it’s worth noting that the switch to chip-and-PIN cards would be an enormous undertaking for financial institutions (some already carrying the costly burden of compromised consumer information following the Target data breach) and that Target’s data thieves successfully stole encrypted PIN information.
With the potential for even more damaging breaches as the role of data in our daily lives increases, RILA’s initiative, as well as the exploration of more secure card technology, is definitely a step in the right direction, but I can’t help but wish that more attention were being paid to actual data center infrastructure management improvements. After all, warnings can be ignored, weapons and defenses can be used against you, and it’s usually the unlocked door or the flat tire that lets the bad guy sneak in the back door.
Deploying an automated asset management solution from RF Code can be an incredibly beneficial way to simplify audits, guarantee audit accuracy, ensure compliance and lower operational costs in your data center. There is, however, significant up-front cost associated with rolling out the asset tags, readers and software necessary to generate and collate all of the asset location and lifecycle data. RF Code customers like IBM and CME Group have reported rapid returns on their asset management rollouts (both reporting ROIs of less than 12 months, with ongoing savings thereafter), but it’s just good fiscal sense to take a hard look at any potential ITAM solution, identify its biggest cost drivers, and see what can be done both to maximize the return on the initial deployment and to prolong the lifespan of the deployed solution without incurring additional costs.
In an RF Code deployment, the asset tags typically account for the largest share of the total purchase cost. Each RF Code reader can read beacon data from thousands of RF Code asset tags and sensors within range (a single reader covers around 2,500-5,000 square feet depending on the characteristics of your data center or facility, more in open-air environments), so the total number of readers deployed in any given scenario will typically be fairly low, especially when compared with the aggregate cost of affixing our active RFID asset tags to each of your enterprise assets. To achieve the best possible value, it’s therefore imperative that these asset tags perform reliably, delivering savings as quickly as possible and continuing to do so over the entire lifespan of the assets to which they’re affixed.
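Given the coverage figures above, reader count (and hence reader cost) scales slowly with floor space, which is why tags dominate the budget. A back-of-the-envelope sketch – the function name is ours, while the square-footage range comes from the text:

```python
import math

def readers_needed(facility_sqft, coverage_sqft_per_reader=2500):
    """Estimate reader count, defaulting to the conservative low end of
    the quoted 2,500-5,000 sq ft coverage range per reader."""
    return math.ceil(facility_sqft / coverage_sqft_per_reader)

# Example: a 65,000 sq ft site (the size of the pilot facility mentioned
# elsewhere on this blog) needs on the order of tens of readers,
# versus thousands of per-asset tags.
print(readers_needed(65_000))        # 26 at the conservative end
print(readers_needed(65_000, 5000))  # 13 at the optimistic end
```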
Here at RF Code, we took a lot of these concerns into account when we designed our asset management solutions. To ensure savings over the entire lifecycle of an enterprise asset (typically 3-5 years) we designed our asset tags to have a 5+ year lifespan, allowing an asset tag to be affixed to an asset as soon as it is purchased and deployed and ensuring that the asset is reliably tracked and monitored right up until it is retired at end-of-life. And the industrial-grade adhesive we use for affixing tags to assets makes it nearly impossible for tags to be accidentally knocked loose or removed from assets, ensuring that they deliver value across the entire lifecycle of both the tag and the asset itself.
But in talking with our customers, some concerns came to light:
- Because the asset tag is more-or-less permanently affixed to the asset, what if I need to retire an asset earlier than expected? Is there any way I can re-use the asset tag rather than just throwing it away?
- What about malfeasance? If you’re actually reading signals sent by the asset tags, can’t someone just pull the tag off and then steal the asset without anyone knowing there’s a problem?
Our M174 asset tags directly address these issues with replaceable installation tabs. These inexpensive tabs – available in form factors designed for simple flag, loop, or thumbscrew installations to ensure they can be used on a broad variety of equipment – can easily be removed from the M174 tag housing and replaced. If you need to retire an asset but the asset tag affixed to it is still functional, you can simply cut the installation tab, replace it with a new one, and then re-deploy the asset tag for use with another asset. In this way you are guaranteed that each asset tag you purchase will be used for its entire lifespan, delivering savings and asset visibility all the while.
But what about ensuring asset security? Can’t people just cut these tabs or yank the asset tag off an asset and run? That’s where our new tamper tabs, also designed exclusively for use with the M174 asset tag, come in. RF Code tamper tabs – also available in flag, loop, and thumbscrew designs – include a carbon fiber filament embedded in the adhesive of the tab. Once the tamper tab is connected to an M174 tag, the carbon fiber filament completes a tamper detection circuit. If this circuit is subsequently broken, the tag immediately begins broadcasting a tamper alert status. Once the tamper tab is applied to an asset, the carbon fiber will be torn if the tab is cut, if the tab adhesive is peeled away from the asset or the tab itself, or if the M174 tag is pulled off the tamper tab by force. And because it takes over 8 pounds of force to pull the tag off of the tab, normal movement of assets will not break the circuit or trigger tamper alerts.
Want to learn more about RF Code’s new M174 asset tag? Read technical information about the tag here, or watch Chris Gaskins discuss the M174 tag and our new tamper tabs. And if you would like to discuss the ways in which these innovative solutions ensure efficiency and contribute to the bottom line, comment below or contact us: we’d love to hear from you.
Guest Blog by William Bloomstein, Market Strategist for iTRACS
Data Center Infrastructure Management (DCIM) means many things to many people, depending on their short-term and long-range goals for the data center. For some, the immediate priority is reducing energy consumption and costs, cutting OPEX. For others, it's getting a better handle on space so they can optimize every square foot of their existing footprint, an important part of their strategy for reducing CAPEX. For still others, it's getting control over their IT asset inventory – what they have and where it is on the floor. Yet data center owners and operators around the world agree on a core tenet of IT asset management:
Any DCIM solution worth its salt has to have a strong asset tracking and location capability that can find, identify, and confirm the location of every asset on the floor.
This is where the DCIM partnership of iTRACS and RF Code shines.
You can't manage what you can't find.
It may not be as bad as worrying about the whereabouts of a teenager at night, but tracking down your servers and other IT assets is vital for data centers seeking firm control over their asset portfolio. It reduces time spent on asset management by your IT staff, accelerates audits, helps with lease management, and streamlines tech refresh and other data center initiatives.
So let me ask you this:
Do you know where all of your servers are located, what they are doing, and why they are there? And HOW do you know that you, indeed, know?
If you can't answer both of these questions with confidence, you need a joint DCIM solution from iTRACS and RF Code. Let me explain ...
There are two types of assets on the floor that commonly go undetected and/or unauthorized, jeopardizing the efficiency and well-being of your operation:
- Misplaced Assets. These are older "forgotten" assets sitting in your racks without your knowledge or control (out of your line of sight). You don't even know they are still hanging around. Perhaps the Line of Business removed its applications and left the physical servers to sit there uselessly, forgotten. Or the boxes' leases expired but no one did anything about it. In any event, these assets are the "lost children" of the IT portfolio. And if you don't asset-tag them, they may sit on your racks, unseen, needlessly consuming power and space, forever.
- Misinstalled Assets. These are assets that are installed in the wrong location – and you may not know it until it's too late and your IT services are negatively impacted! This often occurs in critical facilities where servers must be deployed or relocated fast, outside the normal working practices. In situations like this, assets can be inadvertently installed in the wrong racks. But there's no way for you to KNOW that a mistake has been made unless you have an asset tracking solution like iTRACS with RF Code. Without asset tracking, you may remain in the dark about these assets until it's too late and the complaints about poor or nonexistent service roll in. Not a pleasant place to find yourself.
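Both problem categories fall out of one comparison: the rack each tag is actually observed in versus the location of record. The sketch below shows that reconciliation in miniature; the asset IDs, rack names, and dictionary shapes are invented for illustration and are not how iTRACS or RF Code model the data.

```python
# Illustrative reconciliation of planned vs. observed asset locations.
planned = {"srv-01": "rack-A1", "srv-02": "rack-A2"}  # location of record
observed = {"srv-01": "rack-A1", "srv-02": "rack-B7", "srv-99": "rack-C3"}

def reconcile(planned, observed):
    # Misinstalled: known assets seen in a rack other than their planned one
    misinstalled = {a: loc for a, loc in observed.items()
                    if a in planned and planned[a] != loc}
    # Untracked: assets on the floor with no record at all ("forgotten" boxes)
    untracked = {a: loc for a, loc in observed.items() if a not in planned}
    # Missing: assets of record whose tags are no longer seen anywhere
    missing = {a: loc for a, loc in planned.items() if a not in observed}
    return misinstalled, untracked, missing

misinstalled, untracked, missing = reconcile(planned, observed)
print(misinstalled)  # {'srv-02': 'rack-B7'} -> installed in the wrong rack
print(untracked)     # {'srv-99': 'rack-C3'} -> a forgotten asset on the floor
```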
Unauthorized assets can wreak havoc on your floor ...
These problem assets can:
- Consume power without delivering quantifiable value back to the Business – under-utilized or literally unused
- Take up valuable rack space for no purpose
- Draw in network services for no purpose
- Delay time-to-service (misinstalled servers), costing the Business in lost transactions, customer goodwill, revenue, and profitability
- Potentially jeopardize availability (misinstalled assets), creating technical issues that can reverberate across the physical ecosystem
- Waste money - both OPEX and CAPEX
Fortunately, there is an answer – DCIM featuring the iTRACS Converged Physical Infrastructure Management® (CPIM®) software suite with RF Code's real-time asset data inside.
Use DCIM to run a more efficient data center operation with tighter reins over your asset portfolio
Working together in a powerful DCIM partnership, iTRACS and RF Code can help make your rogue assets go away. Integration with RF Code lets iTRACS CPIM® users collect, manage, and analyze asset location information captured directly from RF Code's RFID sensors. RF Code's real-time asset location data – analyzed and visualized in iTRACS' award-winning DCIM management platform – lets users confirm the location of every asset in their portfolio, manage physical moves/adds/changes with 100% confidence, and conduct other asset management tasks over the lifecycle of their equipment. The sensors cannot lie. So rogue assets can no longer hide.
iTRACS and RF Code - the dream team in DCIM
Scenario: You need to add a bunch of new servers as fast as possible and you'd better do the install right the first time, since there's no margin for error when revenue is at stake. You need to make sure your technicians physically install the right servers in the right racks. The last thing you need is misplaced servers disrupting the environment and delaying services to the business. Watch this brief demo to see how iTRACS and RF Code are collaborating to help ensure that your assets are always where they're supposed to be.
Last week I had the good fortune to attend a keynote presentation by Nate Silver. Silver’s presentation, entitled “The Signal and the Noise: Why So Many Predictions Fail, and Some Don’t” (not coincidentally this is also the title of his new book), addressed the availability of big data and how it affects decision-making. Being a big time data wonk myself, I was pretty excited.
Yes, yes: I probably need to get out more.
Anyhow, one of Silver's main points was that when we talk about "big data" we need to think of it as a mass of independent data points, and that without accurate correlation between multiple data points it can be worthless, or even destructive. In his opinion it's better to think about the value of all this information by focusing not on big data, but on rich data. What's the difference? While big data refers to the massive, undifferentiated deluge of data points, rich data refers to data that has been processed and refined, eliminating the extraneous and leaving only the meaningful information that can then be used for prediction, planning, and decision making.
This nicely parallels the need for accurate, reliable, historical data to ensure efficient asset management and environmental monitoring in the data center. In many cases the professionals who are responsible for monitoring and managing the data center are doing it with almost no data at all, let alone rich data. Often, the location of servers is known only from a single data point gathered during the occasional inventory audit, or the level of cooling for the entire data center is set using a few sparsely distributed thermostats. It's very unusual for a data center manager to have at hand the truly reliable, rich data they need to ensure operational efficiency.
So where would big data center data be refined into rich data center data? Clearly, this would occur in a sophisticated back-end system (an asset management database, a DCIM platform, a building/facilities management system, etc.). Without reliable systems in place that enable users to distill the vast quantities of undifferentiated big data flowing in from their various tags and sensors into rich data, the data center professional is left with the Herculean task of sifting through mountains of data points manually, searching for the signal in the noise. Given the complexity and dynamism of the data center environment, undertaking this effort without reliable automated assistance is unlikely to yield much in the way of results.
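The refinement step can be pictured very simply: collapse a raw stream of per-sensor readings into per-sensor summaries, discarding points that fail a basic sanity check. The following is a toy sketch of that "big data to rich data" distillation; the sensor names, sample values, and plausibility thresholds are all assumptions for illustration.

```python
# Toy "big data -> rich data" refinement: filter glitches, then aggregate.
from collections import defaultdict
from statistics import mean

raw = [
    ("temp-rack-A1", 22.1), ("temp-rack-A1", 22.4),
    ("temp-rack-A1", 250.0),  # a transmission glitch, physically implausible
    ("temp-rack-B2", 31.0), ("temp-rack-B2", 31.6),
]

def refine(readings, low=-20.0, high=70.0):
    by_sensor = defaultdict(list)
    for sensor, value in readings:
        if low <= value <= high:  # drop physically implausible points
            by_sensor[sensor].append(value)
    # Summaries a planner can actually act on: mean, peak, sample count
    return {s: {"mean": round(mean(v), 2), "max": max(v), "count": len(v)}
            for s, v in by_sensor.items()}

print(refine(raw))
# {'temp-rack-A1': {'mean': 22.25, 'max': 22.4, 'count': 2},
#  'temp-rack-B2': {'mean': 31.3, 'max': 31.6, 'count': 2}}
```

A real back-end system does far more (correlation across sources, trending, visualization), but the principle is the same: the extraneous points are eliminated and only meaningful information survives.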
But no matter how good the system, it all starts with data. As Silver explained, to obtain rich data you must have quantity, quality and variety. In other words:
- Quantity: you have to have enough data at hand to be able to discern patterns and trends
- Quality: the data must be as accurate as possible
- Variety: the data should be gathered from multiple sources in order to eliminate bias
So what would a data center need to generate truly rich data? First and foremost, this data must be generated automatically: the effort required to manually collect enough data about asset locations and environmental conditions – and to ensure that it is both accurate and reasonably up to date – would be overwhelming for all but the smallest of data centers. So manually driven data collection processes (clipboards, bar codes, passive RFID tags, and so on) just won't cut it.
Clearly, what's needed is a combination of hardware that will automatically generate and deliver the needed data (active RFID anyone?) and software that helps the user to correlate the vast amount of received “big data” into rich data that can be used to perceive patterns and trends … and ultimately to make informed decisions.
For environmental monitoring applications such as managing power and cooling costs, identifying hot spots, ensuring appropriate air flow, and generating thermal maps, this requires an array of sensors that generate a continuous flow of accurate environmental readings – temperature, humidity, air flow/pressure, power usage, leak detection, and so on.
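Hot-spot identification, for instance, reduces to checking each sensor's latest reading against an inlet-temperature ceiling. Here is a minimal sketch; the sensor names are invented, and the 27 °C ceiling is an assumption loosely based on common ASHRAE-style recommended inlet maximums, not a value prescribed by any particular product.

```python
# Illustrative hot-spot check over the latest temperature readings (deg C).
latest = {
    "inlet-A1-top": 24.5,
    "inlet-A1-bottom": 22.0,
    "inlet-B4-top": 29.8,  # hot-spot candidate
}

HOT_SPOT_CEILING_C = 27.0  # assumed ceiling; tune to your facility's SLA

def hot_spots(readings, ceiling=HOT_SPOT_CEILING_C):
    """Return sensor names whose readings exceed the ceiling, sorted."""
    return sorted(s for s, t in readings.items() if t > ceiling)

print(hot_spots(latest))  # ['inlet-B4-top']
```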
For asset management applications such as physical capacity planning, inventory management, asset security, lifecycle management and so on, this requires tags that automatically report the physical location of individual assets as well as any change in these locations without manual interaction.
The data generated by these devices would then be collected and refined in the back-end system, resulting in meaningful, actionable information. But ultimately, it is the data itself that matters. Without a continuous flow of accurate, reliable data about your data center assets and the environment that surrounds them – data that meets all of Silver's requirements of quantity, quality, and variety – even the best DCIM platform will be of only limited value.