July 7th, 2009 [by Doug Alder]
You can follow RackForce on Twitter now http://www.twitter.com/rackforce
May 19th, 2009 [by Doug Alder]
In May of this year GigaCenter will open its doors for tours by prospective customers, and customers can begin moving their equipment in June 2009. The design is scalable: we are starting with 30,000 sq. ft., and that will scale easily to over 100,000 sq. ft. This helps preserve the green footprint, as less space needs HVAC. Internally the design is modular, with up to 32 “GigaVaults”, which are completely self-contained datacenter suites with their own power, cooling, networking, etc. Each GigaVault can hold up to 12 x 42U racks. Racks can be loaded to 1,000 watts per sq. ft. (note that with in-row cooling, two 42U racks of in-row cooling would be required for 10 42U racks at 1,000 W/sq. ft.). As our president and chief visionary Tim Dufour recently said to us:
We have a design that provides the ultimate in flexibility, scalability, and maximum efficiency (Green). The beauty of the in-row cooling is that it allows us to build at virtually any density per vault. We have cold-aisle containment but can also provide hot-aisle containment to accommodate gas-based fire suppression systems. We have monster racks with additional rack space above that can be used for secure internal use (switches etc.) or customer purposes. Extra security can be provided with heavy diamond mesh over the top of the vault. Any version of an in-rack ePDU can be provided as per customer requirements, 10 Gbps networking both LAN and WAN, low cost Internet, and the list goes on. I don’t think anyone on the planet can provide this kind of flexibility.
This is just the start, folks; lots more to come, so stay tuned! If you are in need of colocation space and would like a tour of GigaCenter, please contact me, Doug Alder (dalder at rackforce dot com, ph: +1 (250) 448-2203), Jay Robinson (jrobinson at rackforce dot com, ph: +1 (250) 717-2340 ext. 2303) or Paul Amodea (pamodea at rackforce dot com, ph: +1 (604) 535-5769) to set up an appointment. You had better hurry, though, as phase I is rapidly selling out, and if you miss out on phase I you’ll have to wait for phase II.
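The vault figures above imply some simple capacity arithmetic. The sketch below uses only the numbers quoted in this post; the per-rack footprint is an assumption added purely for illustration.

```python
# Back-of-envelope capacity check for the GigaCenter numbers quoted above.
# VAULTS, RACKS_PER_VAULT and WATTS_PER_SQFT come from the post; the
# per-rack footprint is an assumption, not a quoted figure.

VAULTS = 32            # GigaVaults at full build-out
RACKS_PER_VAULT = 12   # 42U racks per vault
WATTS_PER_SQFT = 1000  # quoted maximum load density

total_racks = VAULTS * RACKS_PER_VAULT
print(f"Total 42U racks at full build-out: {total_racks}")

# Assuming a typical ~7 sq. ft. footprint per rack (an assumption),
# the maximum per-rack draw at 1,000 W/sq. ft. would be:
ASSUMED_RACK_SQFT = 7
max_watts_per_rack = WATTS_PER_SQFT * ASSUMED_RACK_SQFT
print(f"Implied max draw per rack: {max_watts_per_rack / 1000:.0f} kW")
```

At full build-out that is 384 racks; the per-rack kW figure scales directly with whatever footprint you assume.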
November 19th, 2008 [by Doug Alder]
ICT (Information and Communications Technology) is fast becoming one of the leading causes of global warming, due to the enormous amount of power consumed in the production and use of ICT devices and services. Finding ways to reduce that power usage is the key to greening your IT.

These are tough economic times, and for a CIO/CTO to justify changes to the CEO, he or she needs to stress the economic rewards of making radical changes to the company’s ICT infrastructure, not the technical or green aspects. As big and important an issue as going green is these days (not to mention being très cool), a company’s bottom line and cash flow trump it every time.

Across the industry, servers utilize only between 10 and 20% of their capacity. Often the same can be said for other gear, such as storage, switches and desktops/laptops (do your employees have both a laptop and a desktop computer? If so, why?). Making better use of this infrastructure is the first step in gaining efficiency in your ICT, and by doing so you reduce your TCO (Total Cost of Ownership) and increase your ROI (Return on Investment).

Often in a corporate structure the answer to putting a new piece of software into use has been to dedicate a server strictly to that program. Not only is this a waste of computing resources, it is a waste of the company’s money. That server will be drawing power and requiring cooling 24 hours a day, seven days a week, sitting mostly unproductive and draining money from the corporate bottom line to keep it running.
October 14th, 2008 [by Doug Alder]
Bill St. Arnaud, Chief Research Officer and one of the leading network architects for CANARIE, wrote an excellent article on calculating your baseline GHG emissions. Bold emphasis in the article is mine.

In order to get started in carbon trading it is necessary to first establish your baseline: what amount of CO2 is your project/organization responsible for generating right now, and how and where is it being generated? Once you know this information you can begin planning ways to reduce those emissions and thus earn carbon credits, and the best way to do that is to move your ICT infrastructure to a green data center. RackForce’s current K3 datacenter is very green, and its new datacenter, GigaCenter, will be one of the greenest on the planet.
Here are a couple of excellent web sites explaining in detail the process of calculating baseline GHG emission data for your network, ICT equipment or cyber-infrastructure. Once you have established a baseline for your current emissions, your organization can then explore how to go about reducing its GHG emissions in order to meet carbon neutrality goals, either set by your organization or by government, and ultimately earn carbon offset dollars from various carbon trading exchanges and/or trusts.

Virtualization of networks and computing through clouds or grids using SOA, as well as purchasing green power or moving infrastructure facilities to zero carbon data centers, will be the most likely ways that organizations can reduce their GHG emissions in order to earn carbon offset dollars. But before proceeding with expensive and time-consuming baseline GHG measurements, an organization should first determine whether it is ready to move to a world of virtual networks (including virtual routers and switches), virtual servers and cloud applications. If the organization’s “server huggers” are not prepared to let go of their physical computers, routers and switches, then there is no point in proceeding with a baseline assessment.

Networks, ICT and cyber-infrastructure are about the only places in an organization where significant GHG reductions are possible. In most organizations in the service sector (education, health, government, banking, finance, telecom, etc.) ICT is by far the largest producer of GHG emissions.
Although some savings in GHG emissions can be made through video conferencing, tele-commuting, tele-work centers and adjusting building heating and cooling systems, these savings will be marginal compared to the savings that are possible through virtualization and the use of green power, or relocating ICT equipment to zero carbon data centers. The dollar savings in energy costs and the potential to earn carbon offset dollars can be several millions of dollars per year for a small to medium size organization (50 to 500 people).

You can quickly do your own back-of-the-envelope calculation of the potential dollars (within an order of magnitude) for your organization:
- Each computer server produces 8 tons of CO2 per year
- Each PC or laptop produces 4 tons of CO2 per year
- Each printer or photocopier produces 10 tons of CO2 per year
- Each router produces 20 tons of CO2 per year [commercial datacenter-class routers, not your home D-Link style routers; those are about the same as a PC as they use about the same amount of power -DA]
- Each Ethernet switch produces 5 tons of CO2 per year
Carbon offsets are currently trading between $7 and $20 per ton, but next year Europe is projected to raise the carbon price from cap and trade to $100 per ton. It is expected that the cost of carbon will rise to $400 to $1000 per ton over the next few years. The above numbers assume that all the electrical power used by the organization is generated from coal. However, even if your electrical power comes from cleaner sources such as nuclear, gas and oil, it is expected that cap and trade will push up the cost of power from these sources to a slight discount from power produced from coal. True renewable power, such as that produced by windmills, hydro and solar systems, may trade at a premium to the market, especially within large urban centers.

Guidelines for Quantifying GHG Reductions from Grid-Connected Electricity Projects: http://www.wri.org/stories/2007/09/guidelines-quantifying-ghg-reductions-grid-connected-electricity-projects

The Purchase of Green Power: http://www.thegreenpowergroup.org/retail.cfm?loc=us
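As a quick illustration of the back-of-the-envelope method above, here is a short sketch using the per-device figures listed. The device counts and the $20/ton price are illustrative assumptions, not figures from the article.

```python
# Bill St. Arnaud's back-of-envelope per-device numbers, as listed above.
# All figures assume coal-generated power, as he notes.
CO2_TONS_PER_YEAR = {
    "server": 8,
    "pc_or_laptop": 4,
    "printer_or_copier": 10,
    "datacenter_router": 20,
    "ethernet_switch": 5,
}

def annual_offset_value(inventory, price_per_ton):
    """Estimate annual tons of CO2 and offset dollars for a device inventory."""
    tons = sum(CO2_TONS_PER_YEAR[kind] * count for kind, count in inventory.items())
    return tons, tons * price_per_ton

# Hypothetical ~200-person organization (counts are illustrative only):
inventory = {"server": 40, "pc_or_laptop": 220,
             "printer_or_copier": 15, "ethernet_switch": 25}
tons, dollars = annual_offset_value(inventory, price_per_ton=20)  # top of today's range
print(f"{tons} tons CO2/year, roughly ${dollars:,} at $20/ton")
```

At the projected $100-plus per ton prices the same inventory would be worth an order of magnitude more, which is the point of the article.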
August 6th, 2008 [by Doug Alder]
Until lately, conventional “wisdom” has held the belief that going green negatively affects a company’s bottom line. This view, rightfully, is beginning to change, no doubt in part from the perception of coming carbon regulation. In a very insightful post, one company president makes some excellent points on corporate sustainability:
Companies today can be classified in one of five stages as each advances toward sustainability. Those stages are:

- Awareness: Company becomes aware that environmental concerns are permeating discourse, though sustainability as a value is absent from corporate culture.
- Resistive: Company becomes aware of its own environmental impact of doing business, but demonstrates no commitment to environmental responsibility and possibly some reaction against it.
- Legalistic: Company strictly focuses on compliance with minimum environmental regulations, with no commitment to raising standards for conservation or energy efficiency.
- Reactive: Company recognizes the strategic value of sustainability opportunities, but pursues only opportunities that do not create new risks.
- Strategic: Company uses a proactive approach to sustainability opportunities and evaluates the impact of sustainability initiatives on the long-term value of the enterprise.

In spite of the payoffs that some big businesses have received from sustainability initiatives, many companies still view a sustainability commitment through the lens of compliance. When companies progress beyond compliance and extend their actions strategically, they become more nimble and better equipped to meet the rapidly changing demands of the marketplace. [emphasis added]
June 11th, 2008 [by Doug Alder]
As you can see from above, this is a datacenter like none other you’ve experienced before. RackForce is now pre-selling colocation space in gigaCENTER, so don’t miss out on this opportunity; we expect it to sell out fast. Contact Doug Alder (dalder at rackforce.com) or Jay Robinson (jrobinson at rackforce.com) on our gigaCENTER sales team now to get started.

[Update June 11, 1:28pm] IBM has issued a press release on its role in building gigaCENTER:
KELOWNA, BC and MONTREAL – 11 Jun 2008: IBM (NYSE: IBM) has signed an agreement to help build a CDN $75 million, 150,000 square foot “green” data center in the heart of British Columbia with gigaCENTER Services Corporation, in partnership with RackForce Networks.

The new facility, called gigaCENTER, will be among the most efficient and “greenest” large-scale data centers in Canada. It is being developed using IBM’s modular approach and will include power and cooling capabilities to support a variety of technologies, from high-density blade servers to mainframes.

“We are building a data center with IBM in a safe and secure location to respond to growing issues about natural disasters such as earthquakes and floods,” said Tim Dufour, CEO of both RackForce and gigaCENTER. “This center will support the latest technologies using ‘green’ hydro-generated power and the most efficient, environmentally friendly design. The IBM design is calculated at a Power Usage Effectiveness rating of 1.38, which will mean our facility will be among the most efficient in the industry.”
IBM products and services will be delivered over the three-year construction, with the first phase scheduled to open in Q2 2009. When completed, the facility will support 70,000 square feet of raised-floor data center space and create jobs for up to 100 employees.

Customers of the new center will be able to rent space in increments as small as one cabinet, up to dedicated cages and private rooms. The center will provide facilities to support on demand server capacity services and Business Continuity and Resiliency Services, delivered through RackForce and IBM Global Services.

“A year ago when IBM launched Project Big Green, one of its goals was to help identify ways to optimize data center usage and reduce energy consumption needs,” said Steve Sams, IBM vice president, Global Site and Facilities Services. “This new data center is an example of this initiative. By offering ‘green’ colocation and data center services, gigaCENTER and IBM will enable enterprises to meet their corporate and IT environmental goals.”

About RackForce
RackForce is a leading provider of green data center infrastructure and network services from its strategically located facilities in the heart of British Columbia, Canada. Through its superior data center design, automated systems and virtualization expertise it provides highly reliable On Demand servers, colocation and connectivity to a worldwide customer base.

About gigaCENTER
gigaCENTER Services Corporation is a leader in the design, construction and operation of premium green data centers engineered to support the rigorous computing demands of today and the future. gigaCENTER will provide power and cooling capabilities to support a variety of technologies including high density blade servers, virtualized server clusters and mainframes.

About IBM
For more information about IBM, go to www.ibm.com.
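The Power Usage Effectiveness figure quoted in the release is simply total facility power divided by the power that reaches the IT equipment. A minimal sketch of what a 1.38 rating implies, with an assumed IT load (the 1,000 kW figure is illustrative, not from the release):

```python
# PUE = total facility power / IT equipment power. The press release quotes
# a design PUE of 1.38; the IT load below is an assumption for illustration.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness ratio."""
    return total_facility_kw / it_load_kw

it_load_kw = 1000                 # assumed IT load
facility_kw = it_load_kw * 1.38   # facility draw implied by PUE 1.38

print(f"PUE = {pue(facility_kw, it_load_kw):.2f}")
print(f"Overhead (cooling, UPS, lighting): {facility_kw - it_load_kw:.0f} kW")
```

In other words, at PUE 1.38 only 380 W of overhead is spent for every 1,000 W of compute, where older facilities commonly spend as much on overhead as on the servers themselves.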
May 8th, 2008 [by Doug Alder]
When you manage your own server there is nothing, other than backing up your data, more important than staying on top of your server’s security. Your business and/or data depends on it. You only need to have followed the headlines these past few years to see how damaging it can be to a company when its customer database gets hacked. How you protect your server depends on what type of server software you are running, Apache or IIS, and the applications you run on that server. Here are some suggestions, the first of which is: when it comes to learning about server security, search engines are your friends.

Apache:
Windows Server 2003
PHP Security

A lot of compromises aren’t Apache-, SQL- or IIS-specific; they come through add-on modules such as PHP and the scripts that they run. PHP users should consider enabling PHP Safe Mode or running PHP in CGI mode. They may also want to look into using Suhosin for PHP (something cPanel allows you to do when recompiling PHP). If you are writing PHP scripts then the PHP Security Guide is an excellent resource.

General Advice:

Remember, if you are having to deal with these issues it is because you are not on a managed server; you are on a self-managed server. Your provider is there to help, but under most circumstances, as the server administrator, it’s your responsibility to stay on top of your server’s security.

These tips and tools are only suggestions based on my background research and my own server experience. If you’re not sure about any tip or tool, please make sure to do your own due diligence. Ask your provider’s support staff for guidance (they shouldn’t charge for advice). There are many tools and methods available out there. The key is to stay on top of security hardening and software patching. Do some research and apply the appropriate measures to protect your server and its contents.
April 15th, 2008 [by Doug Alder]
The Anywhere Enterprise will be connecting to a virtual data center (VDC). The following is from a Yankee Group paper (emphasis below mine; the paper is not available online. See here as well for a 2008 interview with the SVP of Enterprise Research at Yankee Group):
Yankee Group’s vision of an Anywhere Enterprise™ is an organization whose employees, customers, assets and partners connect to applications, information and services when and where they need them. Yankee Group’s definition of the Anywhere Enterprise is built on the following five key pillars that will drive data center transformation:
- Consumer technology will continue to lead the way in delivering a personalized IT experience.
- Content will be king as information becomes available everywhere but it needs to be stored and secured in the data center.
- Client devices will continue to evolve, creating an environment where more data will need to be stored centrally in data centers and the end device will become a thin terminal.
- Connectivity becomes seamless, creating a connectivity fabric enabling users to work from anywhere.
- Collaboration will be a key focus as companies look to make key decisions faster by harnessing information from across the extended enterprise.
The shift to an Anywhere Enterprise will bring forth a new era in computing: the virtual data center.
Just as it is hard for many of us baby boomers to really grasp and use the technology our grandkids are using and developing today, and to see how it will evolve around us, so too is it hard for many in the data center server business to see that the future of datacenters is complete virtualization. Nevertheless, this is the future. It is a future that all the big software and hardware providers are planning for, and it is also where RackForce is headed.

There are many reasons for this path, not the least of which are Total Cost of Ownership (TCO) benefits and a reduced carbon footprint, a factor which is becoming ever more important for corporations as the public, governments and shareholders start demanding better practices. People are becoming ever more connected. As cell phone technology evolves, mobile phones have become minicomputers, and their capabilities just keep growing. They are becoming thin clients to the cloud. This has allowed for an even greater distributed workforce, as employees are no longer constrained to working in their office’s desktop environment. This movement puts tremendous pressure on datacenters as the work is offloaded from the PC to the server. Enter the virtual data center.

Consider a typical data center as it stands today. While the servers may be networked so that they can communicate with each other, they each have their own separate hardware resources (RAM, CPU, storage), and those are dedicated solely to that piece of hardware. This leads to both underutilized resources (typical utilization in a traditional data center is between 25% and 35%; source: Yankee Group) and maxed-out resources, and it increases the number of physical servers a company must have. That in turn increases a whole range of costs: cooling, power, hardware, personnel, and so on. The VDC solves these problems.

Earlier I said that the big software and hardware companies are planning for a future of VDCs. Nowhere is this planning more important than on the network side.
Not only does the network have to carry out its traditional roles, but in the VDC it faces a new challenge: connecting all the diverse pools of resources available on the network and creating an environment where any resource (RAM/CPU/storage) can be accessed as needed by any device. Obviously this creates a whole separate set of challenges for the underlying network hardware. To begin with, the network must be:
Achieving those goals requires a whole new breed of network hardware and software: systems that can be upgraded with no interruption to service. That last point is critical to the successful operation of a VDC, and the major network vendors are working on it. IBM, for its part, is moving forward with Project Big Green and its 3-D Data Center. The 3-D virtualized concept is quite an intriguing way of managing IT/data center resources:
IBM is giving new meaning to the phrase “virtual data center.” And it looks a lot more like Second Life than VMware.

Rather than build a virtual world for online gaming or to give users an alternative reality, IBM made a virtual world where IT executives can examine and manipulate hardware running in their very real data centers. The IBM project, called 3-D Data Center, gives IT shops a 3-dimensional, real-time virtual view of their data centre resources, even if they are spread across the globe.

“It’s a new way to look at systems and interact with them,” says IBM researcher Michael Osias, the man behind this new idea. “Objects aren’t just visualizations. You can think of them as little machines.”

So instead of battling wizards and warriors, data centre administrators get to play with their servers and storage. And it does look something like a game, even if it is not one, Osias notes. IBM contends its new technology will help businesses identify underutilized machines that can be eliminated, distribute workload among data centers, monitor power and cooling, and move processing to cooler sites depending on the weather.

Using avatars, IT operations executives move through their virtual data centres, viewing “a tailored 3-D replica of servers, racks, networking, power and cooling equipment.”
For a different look at how this is being implemented, see this article on Ugo Trade. For the same reason that online 3-D virtual reality games, such as Second Life, became much more popular than the older text-based RPGs, adding that 3-D level of abstraction makes managing IT resources much easier; it makes it more intuitive, for starters.

So you can see the overall sense of the virtual data center: save energy, save labor, save hardware costs, increase utilization, and access from anywhere via any device. That is the gist of the VDC, and it is the future, fast approaching. Even Google has now opened up its cloud to developers, announcing this month that application developers could beta their new applications (if written in Python) on Google’s internal resources, its massive server cluster.

When our new gigaCENTER is completed in Q2 2009, RackForce will be incorporating the very latest in virtualization. We have led the way to date with our (DDS) server strategy (easy, worry-free, work-free upgrades as you grow), were the first to offer virtualization using Microsoft’s software, and RackForce promises you we will continue to be a leader in the future as well. We see virtualization as the future of computing and we will be pushing forward strongly towards it.
March 3rd, 2008 [by Doug Alder]
RackForce’s staff and principals take the state of the environment very seriously. British Columbia, Canada, is truly one of the most beautiful regions in the world. From the Pacific coast through to the Rocky Mountains, BC is the place for outdoor adventure, and many of RackForce’s staff are avid outdoor adventurers. We are grateful for the opportunity to live and work here and thus are committed to preserving what we have.

British Columbia has been a crucible of environmental activism and change for many decades now. Indeed, one could even make the case that the whole modern environmental movement really got its start here in 1971¹, when Greenpeace was formed to protest the Amchitka underground nuclear tests in the Aleutian Islands off Alaska. Add to that the many protests to save virgin coastal rain forests such as the Clayoquot Wilderness Area and Haida Gwaii, and you can understand why we take global climate change seriously here.

The above is to underscore why we at RackForce take the environment, and what we can and should do for it, seriously (see the previous two posts on Gigacentre and datacenter green technology for background). Following is some of what has been happening recently in BC on the carbon front and how it applies to RackForce.

The BC provincial government just this past month brought in a revenue-neutral carbon tax on fuels that will gradually ramp up over the next 4 years. Last year BC Premier Gordon Campbell was the only Canadian representative at the ICAP summit in Lisbon, where BC joined many European countries and US states in signing on to the International Carbon Action Partnership agreement.
(The Vancouver Sun) The carbon tax will apply to virtually all fossil fuels, including gasoline, diesel, natural gas, coal, propane, and home heating fuel. B.C.’s carbon tax, the provincial government claims, will be the most comprehensive in the world. [snip] The new carbon tax will begin July 1, starting at a rate that will have drivers paying about an extra 2.4 cents per litre of gasoline at the pumps. [snip] The tax will then increase each year after that until 2012, reaching a final price of about 7.2 cents per litre at the pumps.
This is certainly a start in the right direction. However, if it is to make a substantial difference in the province’s GHG emissions, it needs to be applied across all industries and products, not just fuel products. RackForce, because it uses zero carbon hydroelectric power, will be minimally affected by this tax.

As carbon taxes become more prevalent across North America and Europe, data centers that rely on non-renewable energy sources are going to find themselves at a tremendous cost disadvantage. As this is an industry with very low margins, it will not be surprising to see some commercial datacenters go out of business, and as mentioned in a previous article, it will make ever more sense for corporations running their own in-house datacenters to seek out companies with zero carbon datacenters, like RackForce, for their hosting and/or server co-location. They will not only avoid those tax penalties but in many cases be able to claim green carbon credits for doing so (if they are involved in cap and trade as well).

Carbon taxes, because they are not open to the types of abuse and cheating (at the corporate level) that cap and trade is (see video below), are ultimately better than cap and trade systems, as long as they are mostly revenue neutral (and there’s the rub) for the government. The portion of the tax revenue that is not returned to the general public in the form of other lowered taxes must be allocated to support environmental research whose goal is to develop technology that further lowers GHG (Green House Gas) emissions. Doing anything else with those taxes would most likely be viewed by voters as just a tax grab.
By making GHG-emitting products more expensive and then rebating that increased revenue stream through separate tax reductions, the overall cost to most members of society remains approximately the same, while the perceived cost of using inefficient GHG producers becomes much higher. This will cause consumers to look for ever lower GHG-producing products in order to reduce their tax burden. That search for lower GHG-producing products (which therefore carry a lower tax burden), combined with government investment of some of the carbon tax revenue into GHG technology research companies, is a great economic development incentive for new and existing companies in that area of research within the political boundaries of the tax-collecting entity. The net effect of carbon taxes (when done with appropriate tax penalties) is the reduction of GHG, whereas that is by no means certain with cap and trade.

Here’s an interesting look at the relative values of Cap and Trade vs. Carbon Tax.
Or, as two former senior BC provincial civil servants (Bruce McRae, Assistant Deputy Minister in the Forests and Energy ministries, and Don Wright, Deputy Minister in the Forest and Education ministries and Secretary to the Treasury) said last year in a report:
“Carbon taxes are the most effective approach because they can apply equally to consumers, businesses and industries, and serve as incentives for all those groups to reduce their energy consumption. The market will go to work. The cost of energy will rise, which will provide businesses and households with an incentive to consume less and demand more fuel-efficient vehicles, equipment, appliances and buildings. Businesses will pursue technologies that result in less greenhouse gas emissions.” [ed. emphasis mine]
That is exactly what RackForce is doing: pursuing a course of business using technologies that reduce our carbon footprint to as close to zero as possible. Join us in helping our planet!
¹ Well, it really got its start with the publication in 1962 of Rachel Carson’s seminal work Silent Spring, but the movement truly took off when Greenpeace was formed and generated international publicity through its confrontations with authorities and corporations over bad environmental practices.
February 28th, 2008 [by Doug Alder]
In the last post (Gigacentre: Where we are going) I talked a bit about green datacenters. The fact is, everybody is talking “green” but very few are actually being proactive about it. The importance of going green cannot be overstated, and not just for the obvious environmental reasons. As more and more companies begin to report their carbon footprints as part of their corporate reporting, attention will be drawn to the large role their ICT infrastructure plays in that reporting. Offloading corporate IT infrastructure to green, zero carbon data centers will suddenly be very attractive to corporate Accounting, PR departments and CTOs (far less physical infrastructure to manage).

In a February 21, 2008 interview with a major finance magazine (not yet published), Tim Dufour, RackForce’s president, said:
The old ways of IT, especially for the mid-market enterprise (less than 1000 employees), are changing. In the mid-market you often see multiple branch offices with small numbers of inefficient servers operating in back-room closets with inadequate cooling and obsolete UPS electrical systems. By contrast, today you see more and more mid-market companies outsourcing to larger, more efficient data centers, which might be the best way to make their IT green. The customer needs to be savvy, though, as we see most data center infrastructure providers claiming to be green even when they are using carbon-based power sources, their data centers are 10 to 20 years old with inefficient cooling/UPS, and they simply are not designed for today’s server density.
Here’s the reality check for you, the datacenter customer. If your current data center is operating like the ones Tim just described then it really isn’t green. What such providers are doing is using some form of carbon offset trading to claim green status, but as I explained in the last post, that is not the same thing as zero carbon. To be green, a data center really needs to be zero carbon, not carbon neutral¹.
¹ If it were, then no matter how much efficiency we achieve in lowering power usage per server, the resulting cost savings, when passed on to the consumer, will result in even more servers being used and thus more GHG being produced.
For datacenters, zero carbon is greener because it causes an overall reduction in GHG through the centralization, virtualization and concentration of server usage and resources in one place, rather than in multiple inefficient data centers, thereby allowing those inefficient datacenters to be closed or scaled down.

Enter Gigacentre, the future of green datacenters, designed from the ground up with the latest in green technologies:
The single biggest factor in costs for a data center is the cost of power, and the greatest use of that power is in cooling the tremendous amount of heat the servers produce. The cost of powering the servers and HVAC is nearing, or exceeding, the cost of the servers and hardware themselves.
IDC estimates for every $1.00 spent on new data center hardware, an additional $0.50 is spent on power and cooling, more than double the amount of five years ago. According to Gartner, 70 percent of CIO’s are reporting that power and/or cooling issues are now their single largest problem in the data center. Gartner estimates that 50 percent of data centers in 2008 will have insufficient power and cooling capacity to meet demand with 48 percent of the data center budget being spent on energy, up from 8 percent a few years ago.(More at blade.org)
Here’s how that power gets used in a traditional datacenter. As you can see, the majority is not spent on actually running the servers but on cooling them and on providing backup power and power conditioning. That 45% spent on cooling is where the biggest opportunity exists to save electricity.
From: Guidelines for Energy-Efficient Datacenters (PDF)
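To put that cooling share in dollar terms, here is a rough sketch. Only the 45% cooling share is taken from the breakdown above; the facility size and electricity rate are assumptions for illustration.

```python
# Rough annual cost split for a traditional datacenter, using the ~45%
# cooling share from the breakdown above. Facility draw and electricity
# rate are assumptions, not figures from the post.

FACILITY_KW = 500        # assumed average total facility draw
PRICE_PER_KWH = 0.08     # assumed commercial rate, $/kWh
COOLING_SHARE = 0.45     # share of power spent on cooling (from the chart)

HOURS_PER_YEAR = 24 * 365
annual_kwh = FACILITY_KW * HOURS_PER_YEAR
annual_cost = annual_kwh * PRICE_PER_KWH
cooling_cost = annual_cost * COOLING_SHARE

print(f"Annual power bill: ${annual_cost:,.0f}")
print(f"  of which cooling: ${cooling_cost:,.0f}")
```

Even at these modest assumed numbers, cooling alone runs well into six figures per year, which is why the technologies below target it first.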
Let’s take a closer look at some of the technologies I mentioned earlier and see how they can assist in lowering power usage, both at the rack and at the HVAC level.
|Green Data Center Technologies|
**Efficient Design**

One problem that confronts datacenters is what to do with all the open space, the room you have left to grow into. It presents heating, cooling and airflow problems for the parts of the room you are currently using.

Efficiency takes on many forms in a data center, from automatically turning overhead lights off when no one is present to running cabling through proper channels to ensure air flow is not interrupted.

While we all understand the importance of a quality infrastructure to support and carry data traffic, there are other areas in the data center where cabling may be hurting your environment, in particular your cooling capabilities and the degradation of connections over time. The first is rapidly becoming a cost drain. Older cooling units, and even the latest and greatest cooling units, will suffer if they cannot move air into the desired locations. The effect of abandoned cable under a data center floor is an air dam.

Efficiency is a function of proper design of the physical plant, which in turn influences work flow and work habits. Gigacentre is being designed with all of this firmly in mind.
**Stored Cooling**

While cooling is never free, there are ways to dramatically lower its cost, by upwards of 50%, and to significantly reduce peak usage. This is accomplished by using IBM's revolutionary Stored Cooling Solution.

This stored cooling solution is designed to optimize the efficiency of equipment that is often overprovisioned and running at low utilization and efficiency. It provides a turnkey solution that is designed to be maintenance-free, requiring only an “oil change” to replace the phase change material once every 25 years. By shifting energy consumption for cooling to off-peak hours, when utility rates are lower, it helps reduce energy cost.

During the night, when power rates are down, the system refrigerates a large mass of a cooling gel which is used during the day, when power rates are high, to provide the cold air for the chillers; in essence it is a massive cold battery. This allows data centers to maximize the efficiency of their existing chillers, reducing the need to overbuild chiller capacity to meet intermittent peak usage. That is not only a saving in infrastructure investment but also a further lowering of potential electrical usage, and thus an environmental benefit, as it lowers the load on the power grid.
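To make the “cold battery” arithmetic concrete, here is a quick back-of-envelope sketch. The daily cooling load and the peak/off-peak rates below are made-up illustrations, not Gigacentre or utility figures:

```python
# Shift the chiller's energy draw from peak daytime rates to off-peak
# nighttime rates via thermal storage. All numbers are hypothetical.
cooling_kwh_per_day = 10_000   # assumed daily chiller consumption (kWh)
peak_rate = 0.12               # assumed daytime rate ($/kWh)
off_peak_rate = 0.06           # assumed nighttime rate ($/kWh)

cost_without_storage = cooling_kwh_per_day * peak_rate
cost_with_storage = cooling_kwh_per_day * off_peak_rate
savings = 1 - cost_with_storage / cost_without_storage
print(f"${cost_without_storage:.0f}/day vs ${cost_with_storage:.0f}/day "
      f"({savings:.0%} lower cooling cost)")
```

Under these assumed rates the cooling bill is cut in half, which is where claims like “upwards of 50%” come from; the actual saving depends entirely on the spread between a utility's peak and off-peak tariffs.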
**Blade Servers**

The bigger the physical footprint, the bigger the carbon footprint. Datacenters striving for zero carbon need to use every tool at their disposal to reduce that physical footprint. One such tool is blade servers. For example, from an April 7, 2007 press release by IBM:

Microprocessors can account for a sizable portion of the power used by a server. The new systems introduced by IBM today are based on low-voltage industry standard processors that provide the same application performance as their higher wattage cousins, but in some cases consume less power. To put this in perspective, consider that for every kilowatt of electricity consumed, on average over a pound of CO2 is released into the environment. For example, with the new low-voltage, quad-core Intel-based blades introduced today, businesses can save up to 60 watts of energy per two-socket blade server, and in an enterprise environment with 1,000 blade servers can prevent the release of nearly 20,000 pounds of CO2 into the atmosphere over a year. That is the equivalent of the amount of CO2 produced by an air traveler flying in a passenger jet round-trip from New York to London seven times.

Gigacentre will be using IBM X series blade servers. Blades take up much less room than traditional servers: in a standard 42u rack you can get up to a 45 percent density improvement compared to standard rack-mounted and stand-alone servers.
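As a sense of the scale involved, the per-blade figure from the press release can be turned into an annual energy number. This is a rough sketch using only the 60 W and 1,000-blade figures quoted above, assuming the blades run around the clock:

```python
# Annual energy saved by low-voltage blades, using the press-release
# figures: 60 W saved per two-socket blade, 1,000 blades, 24/7 operation.
watts_saved_per_blade = 60
blade_count = 1_000
hours_per_year = 24 * 365

kwh_saved_per_year = watts_saved_per_blade * blade_count * hours_per_year / 1000
print(f"{kwh_saved_per_year:,.0f} kWh saved per year")
```

Every kilowatt-hour not drawn by the blades is also a kilowatt-hour the HVAC plant never has to remove as heat, so the facility-level saving is larger still.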
**Virtualization**

Virtualization is the big one. Everything else helps, but this technology, especially with the advent of Hyper-V, really reduces the need for space and hardware. By consolidating multiple servers through virtualization, the use of the underlying infrastructure is maximized. Running multiple virtual environments on the same hardware node allows for easier deployment, operation and maintenance of multiple operating systems, as each environment is separately bootable. Because each physical server can now do the work of several servers, the need for hardware, and the electricity to run it, is lowered. As you can see, virtualization combined with blade servers leads to a substantial positive impact on the environment.
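The consolidation math behind that claim is straightforward. Here is an illustrative sketch; the utilization figures are assumptions chosen for the example, not measurements from any real fleet:

```python
import math

# Many lightly loaded physical servers collapse onto a few well-utilized
# virtualization hosts. All utilization figures below are assumptions.
legacy_servers = 100
legacy_utilization = 0.10        # assumed average CPU utilization per box
target_host_utilization = 0.70   # assumed safe ceiling per virtualized host

# Total useful work, expressed in "fully busy server" units:
useful_load = legacy_servers * legacy_utilization
hosts_needed = math.ceil(useful_load / target_host_utilization)
print(f"{legacy_servers} legacy servers -> {hosts_needed} virtualization hosts")
```

Under these assumptions, a hundred underused machines become fifteen busy ones: less hardware to buy, less floor space to condition, and less electricity to draw and then remove as heat.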
**Chilled Water to the Rack**

Pre-cooling with chilled water is more effective than with air, because water is a much better conductor of heat than air is. The resulting warm water can be recycled, or used to heat parts of the building that do not contain server racks. See IBM's Rear Door Heat eXchanger, which can remove 55% of the heat (55,000 BTU/15,000 watts) coming off the rack. That's heat that won't require additional A/C chillers to cool down. Removing heat at the source is far more effective than letting it escape and then trying to remove it from a much larger mass of air.
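A quick unit check on those heat-exchanger figures: heat removal in watts converts to BTU per hour at a factor of 3.412, so the two numbers quoted above are the same quantity in different units (the spec-sheet values are rounded):

```python
# Convert the Rear Door Heat eXchanger's rated heat removal from watts
# to BTU/hr. 1 watt of continuous heat = 3.412 BTU/hr.
BTU_PER_HR_PER_WATT = 3.412

watts_removed = 15_000
btu_per_hr = watts_removed * BTU_PER_HR_PER_WATT
print(f"{watts_removed:,} W ≈ {btu_per_hr:,.0f} BTU/hr")
```

15,000 W works out to about 51,200 BTU/hr, in the same ballpark as the quoted 55,000 BTU figure once rounding is accounted for.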
**Hot and Cold Aisles**

Crucial to the efficiency, and thus the cost, of a datacenter's HVAC systems is the layout of its floor space. The use of hot and cold aisles allows an operator to separate the hot and cold air flows for greater HVAC efficiency. And, as mentioned earlier, by running your cabling through proper cable channels, only under (and/or above) the hot aisles, you decrease cold air flow interruptions and improve cooling efficiency even further.
These are just some of the technologies that will go into making Gigacentre one of the greenest datacenters in the world. Stay tuned for more news.
1. One of the major problems with carbon offsets is that there is no way to accurately show that the carbon really is being offset. There has been a lot of fraud, with money that does not go into real renewable energy projects but instead into false-front operations that only appear to be doing projects. Other problems arise when companies claim carbon credits for carbon taxes they pay. The whole carbon trading scheme is so full of holes that it cannot be relied upon.