When considering new Data Center Solutions or the consolidation of multiple sites, many questions arise:
- Where should the Data Center be located?
- How big should it be?
- How much power consumption can be expected?
- What is the uptime target (which tier, how many nines)?
- What are the technologies to use?
- How should the Data Center be laid out?
- How long is the life span?
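The "how many nines" question above translates directly into allowable downtime per year. As a rough guide, the Uptime Institute's commonly cited tier figures run from about 99.671% (Tier I) to 99.995% (Tier IV) availability. A minimal sketch of the arithmetic (assuming availability is measured over a 365-day year):

```python
# Downtime budget implied by an availability target.
# Assumption: availability is applied uniformly across a 365-day year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Maximum downtime per year allowed by a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for nines in (99.0, 99.9, 99.99, 99.999):
    print(f"{nines}% availability -> {downtime_minutes_per_year(nines):.1f} min/year")
```

For example, a "four nines" (99.99%) target allows roughly 52.6 minutes of downtime per year, which is why the tier decision drives so many of the cost and redundancy decisions that follow.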
Upgrading current systems may at first seem easier, but this too raises its own set of questions:
- Should we continue to use the same products, or start using higher-grade options?
- Space is already tight; what higher density options are available?
- What are the distance limitations for expected applications?
- What new standards do I need to be aware of?
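The distance question above is ultimately a latency question. A back-of-envelope sketch, assuming light propagates through optical fiber at roughly 200,000 km/s (about 5 microseconds of one-way delay per kilometre), ignoring equipment and protocol overhead:

```python
# Propagation-only latency over optical fiber.
# Assumption: signal speed in fiber is ~200,000 km/s (refractive index ~1.5);
# real-world figures will be higher once switches, routers and protocol
# handshakes are added.

FIBER_SPEED_KM_PER_S = 200_000

def round_trip_latency_ms(distance_km: float) -> float:
    """Round-trip propagation time over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

# Synchronous storage replication, where every write waits for the round
# trip, is typically kept to short metro distances for this reason:
for km in (10, 100, 1000):
    print(f"{km} km -> {round_trip_latency_ms(km):.2f} ms RTT")
```

At 100 km the fiber alone adds about 1 ms of round-trip delay per write, which is why latency-sensitive applications constrain how far apart consolidated sites can be.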
Accurate, economical planning and the sustainable operation of a modern data center are steps companies must take to meet their availability requirements. The enormous infrastructure requirements involved lead many companies to opt for an external supplier and outsource their data center needs. Outsourcing companies (such as IBM, HP and CSC) offer their customers the use of data center infrastructure in which they manage the hardware as well as the system software. This offers the advantage of improved control over IT costs, combined with the highest security standards and the latest technology. In outsourcing, a company's systems and applications are first moved to the new data center under a migration project, and are then run in the outsourcing company's data center. Along with data security, high availability and operational stability are top priorities. The data center must be able to run all the necessary applications and the required server types and classes. With the increasing number of real-time applications, the number of server variants is soaring as well.
Today’s enterprises are beginning to hit a wall with their old-school data centers. Data centers have become too big and too slow when they really need to be cost-effective, efficient and responsive. As it is, enterprise IT architects are struggling to keep pace with accelerating business demands for more storage and compute resources, and are unable to take full advantage of new technologies designed to improve infrastructure performance, scale and economics. Building ever-bigger, ever more expensive silos of proprietary hardware is no longer an option. What is needed is a complete rethinking of how data centers are designed and managed.
Present-Day Data Centers
- The current convergence reference architecture combines consolidated network storage (SAN/NAS), a single unified network fabric (e.g. 10 Gigabit Ethernet), consolidated servers (typically blades) based on industry-standard x86 processors, and server virtualization software.
- The current convergence reference architecture has led to greater utilization of processing and storage resources (and made vendors a lot of money in the process). But this architecture will not scale for big data, mobility, cloud computing and social web platforms.
- Change is being forced from two directions:
– Demands created by new kinds of applications: hyperscale web, mobile, analytics, social, and e-commerce applications.
– Opportunities created by evolving infrastructure technologies: microservers and systems-on-chips, solid state storage (SSD) and server-side flash, in-memory processing, and software-defined everything (SDX).
WECO Systems’ Recommendation
- Enterprises looking to provision infrastructure at scale for big data, cloud SaaS and mobile need a vision of converged infrastructure on their roadmaps that goes beyond the current convergence reference architecture.
- Organizations have realized benefits in availability, management agility and server CapEx savings through convergence and virtualization; this should remain a theme of the infrastructure roadmap going forward.