Uptimers! Twitterers! Jump on the #DCIM Bandwagon!
We’re hanging out at Uptime Symposium in Santa Clara, CA, enjoying our time chatting with attendees and checking out cool technologies. We’re excited that 451 has gathered its top analysts and big-name IT vendors, including our partners IBM and CA Technologies. Our own VP of Marketing and Strategic Partnerships, Richard Jenkins, will speak with IBM’s Brent Kiger this Wednesday at 12pm on how IBM has seen ROI from deploying our asset management and environmental monitoring solutions.
The hot topic at Uptime 2013, finding its way into seemingly every presentation and conversation, is Data Center Infrastructure Management, or DCIM. DCIM optimizes datacenters by improving operational efficiency, ensuring audit compliance, and meeting immediate capacity needs. The demand for DCIM tools is growing rapidly as both the Big Data and Mobile markets expand. Datacenters are producing more data and consuming more power than ever before, which is generating the latest buzz that DCIM is no longer “nice to have” but now “essential for the control and automation of big data.” As a top DCIM vendor, we are naturally thrilled to be the hot commodity at Uptime.
We’re also thrilled to see Uptime going mobile with their Smartphone App and social with their #Uptime13 hashtag. We’ve really enjoyed reading tweets from vendors and attendees, particularly @CA_DCIM, @ioDatacenters, @rasciert, and @AndyLawrence451. Since #Uptime13 and #DCIM are trending on Twitter, we’ve decided to share some of the DCIM stats we’ve learned from 451 and Uptime, along with a few RF Code stats from 2013. They’re all 140 characters or fewer and ready for you to tweet to your followers with the #Uptime13 and #DCIM hashtags. For more information on DCIM and RF Code, stop by booth #416 and chat with our team! If you’re not at Uptime, contact us to learn more about DCIM!
1. The DCIM market generated about $429m in revenue in 2012. [TWEET THIS STAT] (451)
2. At the moment, no datacenter exists at a self-optimizing, autonomic level. [TWEET THIS STAT] (451)
3. The first quarter of 2013 was a record period of customer orders for DCIM software. [TWEET THIS STAT] (451)
4. For every 1 degree a data center’s cooling is reduced, 2% of annual power costs are saved. [TWEET THIS STAT] (RF Code)
5. IT equipment failure costs an average of $750,326 per outage. [TWEET THIS STAT] (Emerson)
6. Asset management and environmental monitoring combined make up 80% of the DCIM market. [TWEET THIS STAT] (451)
7. DCIM revenue is projected to grow 69% this year alone. [TWEET THIS STAT] (451)
8. DCIM sales are expected to grow at 42% to reach $1.8bn in aggregate revenue by 2016. [TWEET THIS STAT] (451)
9. 86% of companies experienced one or more instances of system downtime last year. [TWEET THIS STAT] (Datacenter Knowledge)
10. 58% of DCIM revenue comes from the US, while 22% comes from Europe and 14% comes from Asia. [TWEET THIS STAT] (451)
11. Only 13 DCIM vendors account for 75% of total DCIM revenue. [TWEET THIS STAT] (451)
12. Top DCIM drivers are total IT under management, Cloud Computing, Awareness, and Improved Products. [TWEET THIS STAT] (451)
13. DCIM pricing is generally regarded as flexible and not fixed. [TWEET THIS STAT] (451)
14. RF Code has shown between 99.7%-100% accuracy in asset tracking. [TWEET THIS STAT] (RF Code)
“Big Data” refers to the unmanageable amount of data being driven by a connected world. It is becoming a challenge that companies need to address, and quickly, not just to stay competitive but to remain a viable option in their sector.
However, with big data comes Big Infrastructure. Scalability of the essential facilities, technology, and energy required to process data is as important as, if not more important than, the data itself. Without a managed and scalable infrastructure, there is no Big Data to pose a challenge in the first place!
It is not just the volume of data that makes it ‘big.’ It is all about the correlation of data and the digital insight it yields. This understandably presents a unique challenge to all enterprises: how do you collect, store, analyze, and protect big data effectively while at the same time planning for its continual growth?
These tips fall into three parts: the infrastructure required to manage the data, the analysis of the data, and the strategic reasons for doing so.
1. Security. Ensure the infrastructure housing the data is secure. It’s tough to analyze data when it is lost. Data security is one of the largest issues for regulated industries and one of the biggest concerns for customers and executives alike.
2. Performance. Continual assessment of the environmental factors that affect the data center is essential. Heat, humidity, liquid, and unmanaged thermal fluctuations all lead to system failure and downtime. Monitor these factors, correlate them, automate preventative actions, and use historical data to plan for future capacity requirements.
3. Inventory/Asset Management. Data center assets are continually on the move due to environmental conditions, fluctuating workloads, maintenance, failures, depreciation, and many other factors. As with security, failing to meet audit requirements results in fines and regulatory penalties. Implement an efficient, automated asset tracking and management system.
4. Integrate. Don’t buy “best of breed” at the expense of integrated solutions. It is important that data can flow across multiple platforms, providing information at consolidation points so that action can be taken, either systematically or physically.
5. BI & “Big Data” Analytics. Probably the most important aspect of managing big data is the ability to capture the right data at the right times and then analyze it in real time. This is not an inexpensive investment, however. BI applications have evolved over the years and are now the domain of large data warehousing and management applications, which makes their integration and configuration simpler… but not simple.
6. Adapt. Continual process adaptation is essential to the agile business. Too often processes become static, leading to an inability to react. Ensure the right data is going to, and being analyzed by, the correct personnel or systems, ultimately rolling up to a management information dashboard.
7. Plan your data center strategy. Large companies are now looking zero to five years out to ensure they have enough infrastructure in place to manage the short-term implications of Big Data. Not doing so presents massive cost and scalability implications that could slow, or even stop, the growth of an organization.
8. Manage Energy. Energy consumption and the environmental effects of data centers are becoming leading drivers of change in the IT sector. Big data requires “Big Power,” while governments and society demand strict governance of the resources data centers consume. Sustainability is key to the power of big data and the ability to use it to your advantage.
9. You can’t manage what you can’t find. Track and monitor assets to ensure data stays secure and available. Lapses in asset security are among the most common infractions as regulations tighten around the security of consumer and corporate data.
10. Focus. Don’t get sucked into the hype around Big Data. Data growth is unprecedented and presents challenges on many levels, but stay focused on the strategy of your organization and look for the data that best supports it.
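Tip 2 above, monitoring environmental factors and automating preventative action, can be sketched in a few lines of code. This is a minimal illustration with hypothetical metric names and thresholds, not the API of any specific monitoring product:

```python
# Hypothetical sketch: check environmental sensor readings against
# safe operating ranges and raise alerts for out-of-range values.
# Metric names and limits are illustrative assumptions.

THRESHOLDS = {
    "temperature_c": (18.0, 27.0),   # a commonly recommended inlet range
    "humidity_pct": (20.0, 80.0),
}

def check_reading(metric, value):
    """Return an alert string if a reading falls outside its safe range."""
    low, high = THRESHOLDS[metric]
    if value < low:
        return f"ALERT: {metric} low ({value})"
    if value > high:
        return f"ALERT: {metric} high ({value})"
    return None

readings = {"temperature_c": 31.5, "humidity_pct": 45.0}
alerts = [a for m, v in readings.items() if (a := check_reading(m, v))]
# alerts now contains one high-temperature alert
```

In a real deployment the alert would feed an automated response (throttling, ticketing, cooling adjustment) and the raw readings would be archived for the historical capacity planning the tip describes.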
Take a closer look at the challenges of managing big data -- and the big infrastructure on which it resides. Download our whitepaper "Big Data. Bigger Security Risks" today.
RF Code technology, featured prominently in GE Healthcare's Brilliant Machines campaign, generates the real-time data that powers these innovative solutions. Our asset and patient tracking solutions are helping GE transform the patient experience. Take a closer look at how our real-time asset location and wire-free environmental monitoring data helps to reduce wait times and increase hospital staff efficiency.
Agent of Good
The Matrix's Agent Smith takes a look at how GE Healthcare is helping hospitals build connected operations and reduce patient wait times.
Improving the Patient Experience
Take a closer look at how GE's Aventura Hospital deployment helps nurses and doctors track patients and locate equipment, significantly reducing the amount of time that an individual waits for treatment, and improving the overall patience experience.
Time can be the most valuable resource in a hospital. This feature shows how GE Healthcare's AgileTrac patient tracking solution -- powered by real-time location data from RF Code -- delivers real-time data to reduce the time patients spend in waiting rooms, as well as the time that doctors and nurses spend searching for equipment.
Summerville Healthy Hands
A system for continuous monitoring of hand-washing, using data generated by RF Code technology, helps a South Carolina hospital monitor compliance with its "wash in, wash out" policy for patient rooms.
RF Code and GE in the News
See how RF Code's proximity sensing badge tags are being deployed by GE Healthcare to help ensure hand washing policy compliance and reduce infection rates in healthcare facilities.
In April 2012, Canada’s Toronto-Dominion Bank disclosed a security problem of its own making: the loss of 260,000 customers’ data on two server back-up tapes that went missing.
In February 2012, Emory Healthcare lost ten computer disks containing encrypted personal information on over 300,000 patients.
In 2011, backup tapes containing military and beneficiary medical data were stolen from the Department of Defense, resulting in a $4.9 billion class action lawsuit.
In 2009, BlueCross BlueShield of Tennessee paid a $1.5 million settlement and spent $17 million on corrective action after losing 57 hard drives containing data on more than one million customers in a burglary.
Internal Security Threat on the Rise
These are just a few of the hundreds, even thousands, of data security breaches caused, not by external forces, but by companies’ own internal negligence — including misplaced, lost, and stolen devices on which the data is stored.
Companies are spending billions of dollars to fight the threat of cybercrime by deploying processes to keep hackers out of their systems and away from their data. Meanwhile, many of today’s most damaging security breaches occur from within.
This internal security threat is every bit as serious, if not as sensational, as external threats — and comes with the same high cost regulatory penalties, lawsuits, and PR nightmares.
Device Tracking Technology Boosts Security
In today’s world of sophisticated technology, the public wonders: How can a company misplace a critical device containing customers’ private data?
Worse, the explosion of big data in today’s corporations is only going to make the issue of securing data internally even more challenging — and critical.
A new white paper from RF Code overviews the threats of internal data security and offers a solution that gives companies 100 percent tracking control over every device — dedicated asset tracking networks. Download Big Data. Bigger Security Risks. How Data Centers Can Track, Manage, and Secure Data With Dedicated Asset Tracking Networks today.
The Six Greatest Risks Facing IT Asset Inventory and Management — and the Single Automated Solution
From procurement, to maintenance, to retirement, the lifecycle of a single piece of IT hardware introduces countless opportunities for asset-tracking vulnerability. Multiply that by the thousands of devices across a company’s network and these risks expand exponentially.
In their search to find the best way to track and manage computing assets, companies are finding a solution that works: automated, wire-free IT asset management systems. These systems deliver real-time asset tracking, automated alerts and updates, immediate reporting, global monitoring, easy application integration, and lifecycle management.
Here is just one example of the many risks facing today’s expanding IT networks — and the one automated and cost-effective solution that mitigates them all.
Challenge: Reduced Asset Visibility
Servers and networked storage devices spend most of their useful lives mounted in racks. These locations house the greatest collection of sensitive corporate data, and are likely to be protected by the strictest security measures.
And yet, in spite of the best efforts of IT staff and the deployment of asset-tracking tools, like RFID and bar codes, assets are regularly unaccounted for, either due to being lost, misplaced, stolen, or misappropriated while being moved, retired, and serviced.
Without an automated asset-tracking mechanism, an asset removed from a rack one day might be misplaced for months. The opportunity for losses increases when devices are outside of a server facility for maintenance, overhauls, lease returns, data backup storage, or end-of-life disposal.
Solution: Real-Time Asset Visibility
The on- and off-network vulnerability of physical assets can be reduced or eliminated when their locations are dynamically monitored with a wire-free tracking system.
An automated asset tracking system can help provide a continuous stream of data to track and monitor assets from the day they are provisioned to the day they are retired. Moves, additions, and changes are linked to continuous data generated from each device.
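As a sketch of how such a continuous stream keeps inventory current, consider folding tag-location events into a last-known-location table. The event fields here are hypothetical illustrations, not an actual RF Code data format:

```python
# Illustrative sketch: each location event from an asset tag updates the
# asset's last-known location, so inventory stays current without a
# manual audit. Field names are assumptions for this example.

def apply_events(inventory, events):
    """Fold location events into a dict of asset_id -> last-known location."""
    for event in events:
        inventory[event["asset_id"]] = event["location"]
    return inventory

events = [
    {"asset_id": "SRV-0042", "location": "rack A3, U12"},
    {"asset_id": "SRV-0042", "location": "repair depot"},  # asset moved
]
inventory = apply_events({}, events)
# inventory["SRV-0042"] is now "repair depot"
```

The point of the sketch: because every move generates an event, the inventory record is only ever one event behind reality, rather than one quarterly audit behind it.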
Download Free White Paper
Learn about all of the challenges facing IT asset vulnerability in a new white paper from RF Code.
Wire-Free Sensor Networks Let Data Centers Lose the Wires, Gain Greater Control Over Their Environments, and Lower Operating Costs
Too hot. Too cold. Too remote. Too complex. Too costly. The challenges in today’s wired data centers are endless.
IT professionals are tasked with controlling data center environments, tracking remote locations, and managing complex and constantly changing equipment and the wires that connect them — all while striving to contain costs. Dealing with these challenges means that IT staffs spend a greater number of hours managing data center environments and monitoring equipment than taking care of other critical business tasks.
Miles of Wires
The traditional approach to environmental monitoring and operational management in data centers has been to deploy large, wired monitoring systems across the network. However, these wired systems are typically costly to install and reconfigure, especially as servers, software, circuitry, and other devices are updated, replaced, and repaired.
Sensor technologies emerged to collect data and report it to a central device, giving data center operators the ability to centrally monitor their networks and improve network management. But the majority of these solutions rely on wires to operate.
Losing the Wires
The next evolution of this technology is a completely wire-free solution that requires no wired connections between the sensors and readers. As a result, wire-free sensor networks offer a significantly improved alternative that:
Is simpler and less expensive to install and manage
Ensures less interference with existing infrastructure
Boosts performance capabilities
Is easier to deploy across a network, even to the most remote regions
The combination of a lower cost installation and higher levels of control over the data center environment with wire-free sensor networks delivers six significant advantages to data center monitoring, management, and control.
A new white paper from RF Code explains all of the benefits of the wire-free advantage. Check it out today.
Agility is an overused term that has been applied as a solution to many problems. As a result, there are many definitions of the word ‘agility.’ We prefer a fairly broad definition, which is:
“The ability to change direction quickly and effectively, in response to, or in anticipation of, opportunities or threats.”
Agility is commonly referenced with words such as speed, flexibility, inventiveness, and nimbleness.
Does this sound like the pressures facing an IT Manager today?
Agility has become increasingly important in today’s changing world with the shift to cloud computing and virtualization. With growing, and variable, demands on computing infrastructure, today’s IT Manager must be agile in developing and executing operational plans. Unfortunately, the majority of IT Managers are reacting to this challenge by overprovisioning their computing resources: processing power, storage, and the energy and cooling those resources require. According to a recent article in DataQuest, a whopping 85% of IT Managers are overprovisioning their data centers by as much as 40%. That is bad, and wasteful, for the business they are operating. An effective Agile Data Center must be future-proof without wasting money on overprovisioning.
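A bit of back-of-the-envelope arithmetic shows why that 40% figure hurts. If capacity is provisioned at 40% above actual need, the unused share of provisioned capacity is 0.40 / 1.40, roughly 29%. A tiny sketch of the calculation, with illustrative numbers only:

```python
# Back-of-the-envelope arithmetic for the overprovisioning figure above.
# If actual need is 1.0 unit and provisioning exceeds it by a given rate,
# the excess divided by the total provisioned is the idle share.

def wasted_share(overprovision_rate):
    """Fraction of provisioned capacity that sits unused."""
    provisioned = 1.0 + overprovision_rate
    return overprovision_rate / provisioned

share = wasted_share(0.40)
# roughly 0.29: over a quarter of provisioned capacity (and the power
# and cooling spend behind it) is idle
```

Applied to a data center's capital and energy budget, that idle share is money spent insuring against demand that never arrives, which is exactly the waste an agile, data-driven operation avoids.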
The Business of Operating an Agile Data Center
IT has become a significant cost center for most companies. It is the responsibility of the IT Manager to operate their data center as a business, which means minimizing expense, while maximizing value. While becoming increasingly focused on the business of their data centers, IT Managers should focus on the following elements:
- Their IT Orientation: What services they offer to the business, how those services are offered, what SLAs are associated with them, and how they meet the SLA criteria
- Financial Insight and Discipline: Collect and analyze costing data, utilization rates, trends, and present them in a way that enables meaningful budget conversations with stakeholders
- Governance: Understand your commitments, and how to reliably and consistently meet them
An effective IT Manager knows how to manage and leverage each of these elements very well. But a world-class IT Manager knows how to manage these elements and be agile: efficiently operating the data center as a business while also quickly meeting the fast-paced new demands of that business. And it means not meeting those demands through wasteful and costly overprovisioning!
So, how does an Agile IT Executive do this?
Lots of data. Data about their equipment. Data about their environment. Real-time data. Knowing every element of what exists in the data center, where it is, and how it is doing.
RF Code is the Source of Data
Operating a business-effective, agile data center requires data. RF Code is in the business of providing IT Managers with data on what equipment is available to them, where it is, and how it is doing. This data is provided in real time, meaning any change that needs to happen immediately is made with knowledge of current data, not data acquired last week, last month, or last year. In fact, in the burgeoning market of Data Center Infrastructure Management (DCIM), a collection of technologies intended to make IT Management increasingly agile, RF Code’s solution appears in Gartner’s Magic Quadrant for data collection.
Contact us and let an RF Code rep show you how our solutions will make you a world-class Agile IT Manager.
Over the past twenty years, Professional Services in the IT industry has enjoyed dramatic growth, outpacing the growth rate of both hardware and software and more than doubling in size over the past ten years, according to Code and Data, Inc. The reasons for this growth are well understood: professional services provide the IT manager with the ability to quickly allocate experienced, specialized, highly productive talent to different IT projects, with the added benefit of freeing up IT managers to focus on issues related to the core business of their companies.
A relatively new Professional Services offering being provided by many services companies is IT Asset Management, or ITAM. At the same time as the IT paradigm shifts to cloud computing and virtualization, IT managers are under enormous pressure to cut costs and increase efficiency. Effective ITAM is a critical solution to both trends, because ITAM gives you complete control of the physical, financial, and contractual attributes of your computing systems and network devices. With ITAM, you know precisely:
In the same way that the broader IT industry has benefited from services, implementing or improving your ITAM solution will benefit from ITAM Professional Services. Leveraging services gives you access to experienced asset management talent who will provide you with the best possible ITAM solution. A successful ITAM services engagement allows you to:
Get the best possible Return on Investment
Maximize the lifecycle of your assets
Help devise effective solutions for managing your assets
Custom tailor your solution to specific business requirements or 3rd party software packages
Provide a knowledgeable assessment of risks and vulnerabilities
There are several Services Models that can be used in an ITAM deployment:
In the first model, a professional services team can provide the initial installation and startup of an ITAM solution. This includes evaluating the IT environment and business needs, and tailoring the solution to maximize its effectiveness. Before the services team wraps up and leaves, they will educate the IT team on the ITAM solution and the process for effective asset management.
In the second model, the entire ITAM process can be operated as a Managed Service, meaning that a professional services company will not only do the initial install and startup of an ITAM solution, but also remain on site for a contracted period of time to operate the solution, and ensure compliance with a set of predetermined requirements. The benefit of operating as a Managed Service is that the cost of having an effective and successful ITAM solution is fixed, and there is no need to allocate additional IT resources internally.
The final ITAM services model is periodic professional services: helping an existing IT organization with tasks such as asset recovery or physical inventory.
RF Code’s active RFID ITAM solution is ready for use under any of these service models. We offer in-house Professional Services for installation of our products, as well as lab-based services for custom software integration or modifications. In addition, we have a network of 3rd party service providers offering the full range of ITAM services.
Your RF Code sales representative is ready and available to discuss the benefits of our ITAM solution and the options available for implementing it with Professional Services.
Earlier this month CIO Insight published a brief feature highlighting the top ten business priorities for data center professionals in 2013. Based on the findings from the Uptime Institute Network’s annual survey of data center professionals, the feature noted that data center professionals need to “find better ways to increase the agility of their operations, to help their organizations adeptly adjust to rapidly shifting business conditions [and need to find] new options to do more with less--as in less energy usage.”
A look at the business value-related items that are on data center personnel’s minds reveals some interesting--if not unexpected--information. While the list includes some broad (and perhaps a bit vague) goals like “delivering value to the customer” the majority of the findings support a growing awareness of just how important data center efficiency is becoming to C-level personnel.
As any data center professional would expect, addressing the rising cost of energy in the face of ever-increasing data center density is of great concern to the C-level “bean counters.” The survey notes not only energy cost but also other energy monitoring and optimization activities, such as capacity planning and the need for improved power utilization metrics, indicating an understanding that efficiency and optimization aren’t strictly about reducing costs, but about ensuring that the data center provides the best “bang for the buck.”
It’s also notable that this focus on efficiency isn’t limited to the well-understood need for optimized power and cooling as a way to lower operating expenses. Asset management (along with other goals that rely on well-defined asset management policies such as data center capacity planning and expansion, IT and facilities alignment, and data center consolidation efforts) is clearly understood to be crucial to increasing the efficiency of a data center. In all, seven of the ten priorities cited by the survey respondents are directly related to asset management and environmental/power monitoring activities. (And if you consider the retention and proper utilization of skilled data center personnel to be at least somewhat dependent on efficient asset management processes--I certainly do--you could make an argument for eight!).
What I find especially interesting, though, is that nearly all of these priorities rely on proper instrumentation and automation. Certainly nearly any data center manager would assert that properly monitoring and managing power and cooling requires a constant flow of data from sensors reporting temperature, humidity, air flow/pressure, power usage, and so forth. As a representative of a company that offers a wide variety of affordable, easy-to-deploy wire-free environmental sensors, I enthusiastically agree.
But it’s also important to recognize that asset management and asset management–aligned tasks like capacity planning and change management can only be successful when information about assets and their locations within the data center is reliable and easily verifiable--something that traditional manual data collection processes (i.e. clipboards and spreadsheets, bar codes, passive RFID tags and so on) simply do not provide. Only real-time asset management solutions like those from RF Code provide a fully automated inventory data feed. This constant flow of accurate, rack-level asset location information ensures that the information you need to properly manage data center assets and capacity planning tasks is available on demand, 24/7.
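One concrete way a real-time, rack-level asset feed supports capacity planning is by making rack utilization a computed quantity rather than a spreadsheet entry. A minimal sketch, assuming hypothetical feed fields rather than RF Code's actual data schema:

```python
# Sketch: derive per-rack utilization from a live asset feed for
# capacity planning. Field names ("rack", "height_u") are assumptions,
# not an actual RF Code feed schema.

from collections import defaultdict

def rack_utilization(asset_feed, rack_capacity_u=42):
    """Return the fraction of rack units occupied in each rack."""
    used = defaultdict(int)
    for asset in asset_feed:
        used[asset["rack"]] += asset["height_u"]
    return {rack: u / rack_capacity_u for rack, u in used.items()}

feed = [
    {"asset_id": "SRV-0001", "rack": "A3", "height_u": 2},
    {"asset_id": "SRV-0002", "rack": "A3", "height_u": 1},
]
util = rack_utilization(feed)   # rack A3: 3 of 42 units occupied
```

Because the feed updates automatically as tagged assets move, a number like this stays accurate on demand instead of decaying between manual inventories.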
So are you ready to learn more about how RF Code can help you achieve your data center business priorities in the coming year? Download our free white paper, Maximizing your Data Center Infrastructure: Centralized Management and Wire-Free Environmental Monitoring, and get started today.
Last week Wired magazine ran an opinion piece by Clive Thompson on the long-predicted but slow-to-arrive "Internet of Things" finally becoming a reality (No Longer Vaporware: The Internet of Things is Finally Talking). The gist of the piece was that after many years of predictions and prognostications, the ability to create and benefit from "stuff that talks" is finally within the grasp of the small business or the Average Joe, and that this has set the scene for explosive growth.
One thing I found interesting, though, is how much focus there was on the sensor side of the discussion. Much of the discussion is focused on the broad variety of sensor-based solutions that are popping up, from the sublime (radiation level-mapping in Japan, earthquake warning systems in Chile) to the entertaining but fairly ridiculous (a tilt-sensing mug to track beer consumption during Oktoberfest), nicely demonstrating the breadth of ways in which these devices are finding their way into our lives.
However, the Internet of Things certainly includes devices that report information other than what they're sensing at a given point in time. There is also enormous value in devices that report their location as well. If you've ever misplaced your car keys (and who hasn't? I suggest you check the couch first ...) then the value of this is obvious. Take that concept and multiply it by hundreds or thousands of assets and you'll see where and why real time location system (RTLS) solutions have emerged based on the availability of devices that automatically report their location.
In a sense, anyone who has implemented an RTLS solution -- in a data center, hospital, distributed IT environment, supply chain, you name it -- has already created their own private Internet of Things, where time-consuming and wasteful manual tracking processes have been replaced by a solution based entirely on the assets reporting their location independently. Self-powered wire-free technologies like active RFID asset tags create tremendous benefits for asset managers, capacity planners, financial officers, and so forth by enabling their critical assets to continually report their presence and location, eliminating manual processes, reducing errors, increasing efficiency, and eliminating waste in the process.
So while Mr. Thompson is certainly correct when noting that "the Internet of Things is finally arriving" (certainly in terms of public acceptance and awareness), RTLS solution adopters embraced it long ago. It will certainly be fascinating to see how things change as location awareness transitions from a bunch of independent organizational asset management solutions to just another aspect of how things work.