So, this is pretty... big. At this very moment, researchers at IBM are building the largest data drive ever -- a 120 petabyte beast made up of some 200,000 ordinary HDDs working in concert. To put that into perspective, 120 petabytes is the equivalent of 120 million gigabytes (or enough space to hold about 24 billion average-sized MP3s), and significantly more spacious than the 15 petabyte capacity found in the biggest arrays currently in use. To achieve this, IBM aligned individual drives in horizontal drawers, as in most data centers, but made those drawers wider to pack more disks into a smaller space. Engineers also implemented a new data backup mechanism, whereby information from dying disks is slowly reproduced on a replacement drive, allowing the system to keep running without any slowdown. A file system called GPFS, meanwhile, spreads stored files over multiple disks, letting the machine read or write different parts of a given file at once while indexing its entire collection at breakneck speed. The company developed this particular system for an unnamed client looking to conduct complex simulations, but Bruce Hillsberg, IBM's director of storage research, says it may be only a matter of time before all cloud computing systems sport similar architectures. For the moment, however, he admits that his creation is still "on the lunatic fringe."
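To make the striping and gradual-rebuild ideas a bit more concrete, here's a minimal Python sketch of both. It is purely illustrative and not IBM's GPFS code: the stripe size, directory paths, and file-naming scheme are assumptions invented for the example.

```python
import os
import shutil
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stripe size -- an illustrative value, not a GPFS setting.
STRIPE_SIZE = 1024 * 1024  # 1 MiB per stripe


def write_striped(data: bytes, disks: list[str], name: str) -> int:
    """Split `data` into fixed-size stripes and spread them round-robin
    across the given disk paths, so later reads can hit every disk in
    parallel. Returns the number of stripes written."""
    stripes = [data[i:i + STRIPE_SIZE] for i in range(0, len(data), STRIPE_SIZE)]

    def _write(idx_stripe):
        idx, stripe = idx_stripe
        disk = disks[idx % len(disks)]           # round-robin placement
        os.makedirs(disk, exist_ok=True)
        with open(os.path.join(disk, f"{name}.{idx:06d}"), "wb") as f:
            f.write(stripe)

    with ThreadPoolExecutor(max_workers=len(disks)) as pool:
        list(pool.map(_write, enumerate(stripes)))
    return len(stripes)


def read_striped(disks: list[str], name: str, n_stripes: int) -> bytes:
    """Read all stripes back in parallel and reassemble the original file."""
    def _read(idx):
        disk = disks[idx % len(disks)]
        with open(os.path.join(disk, f"{name}.{idx:06d}"), "rb") as f:
            return f.read()

    with ThreadPoolExecutor(max_workers=len(disks)) as pool:
        return b"".join(pool.map(_read, range(n_stripes)))


def rebuild_disk(failing: str, replacement: str, delay_s: float = 0.01) -> None:
    """Slowly copy every stripe from a failing disk onto its replacement,
    pausing between files so foreground I/O keeps full speed -- a toy
    version of the gradual-rebuild idea described above."""
    os.makedirs(replacement, exist_ok=True)
    for fname in sorted(os.listdir(failing)):
        shutil.copy2(os.path.join(failing, fname), os.path.join(replacement, fname))
        time.sleep(delay_s)  # pace the rebuild instead of saturating the disks


if __name__ == "__main__":
    disks = [f"/tmp/disk{i}" for i in range(4)]  # stand-ins for real drives
    payload = os.urandom(5 * STRIPE_SIZE + 123)
    n = write_striped(payload, disks, "simulation.dat")
    assert read_striped(disks, "simulation.dat", n) == payload
    rebuild_disk(disks[0], "/tmp/disk_replacement")
    print(f"wrote, verified and rebuilt {n} stripes across {len(disks)} disks")
```

At real scale the hard parts are metadata and indexing rather than the striping itself, which is where GPFS's distributed file indexing comes in; the sketch only shows why spreading one file over many spindles lets reads and writes proceed in parallel.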
IBM developing largest data drive ever, with 120 petabytes of bliss originally appeared on Engadget on Fri, 26 Aug 2011 09:35:00 EDT.
Via: MIT Technology Review
Source: http://www.engadget.com/2011/08/26/ibm-developing-largest-data-drive-ever-with-120-petabytes-of-bl/