Secondary Disk Storage

Essay by cknight03, University, Bachelor's, A, November 2004


The invention of the Gutenberg press led to the start of the first "information revolution," as the printed word was used to disseminate ideas and concepts widely throughout the world. This structural change in how information was controlled and communicated radically altered the social order of its day. The pace of this information revolution was remarkable, as the number of published books rose from some 30,000 in 1454 AD to over 3,000,000 by 1500 AD [2].

In the late 1940s, IBM began research and development on a new technology known as the Tape Processing Machine. Magnetic tape storage of this kind first reached the commercial market on Remington Rand's UNIVAC I computer. As the computer industry developed over the next ten years, nearly all computers entering the market employed magnetic tape storage devices. By the mid-1960s, technology advancements had again provided new forms of storage for the industry: magnetic disks began replacing tape as the norm for data storage because they provided much faster retrieval of information.

In 1956, IBM introduced its first complete storage system, the RAMAC 305, which stored 5MB of data on two-foot-diameter platters. While this was a breakthrough at the time, IBM continued its trend of innovation by developing floppy disks in 1968 and making them commercially available in 1971 [3]. The capacity of these floppy disks increased over the next ten years while their physical size decreased substantially, and by 1982 the 3.5" floppy disk was becoming common in the workplace. In that same year the Bernoulli storage drive was created by Iomega, beginning the trend toward disk-cartridge storage that would last well into the 1990s.

The first digital paper to be universally implemented was CD-ROM/CD-Recordable media. In 1979, Sony and Philips began defining the CD-Audio standard to allow customers to interchange audio CDs across manufacturers of audio players [2]. The huge success of CD-Audio, coupled with its inherently digital nature, led to a minor modification of the CD-Audio format and to the introduction of the CD-ROM in 1984. The first applications, paralleling paper, were electronic books and magazines that could be replicated at relatively low cost and were targeted at vertical markets and consumers [2].

The early 1990s saw the introduction of CD-R media and recorders. Unlike CD-ROM, CD-R allowed users to write data to a disc once, producing customized CD-ROMs. CD-R recorders and media became available in the marketplace at reasonable cost and led to significant acceptance of the technology by industry.

The years between 1990 and 1995 saw an explosion in the use of CD-ROMs for applications ranging from software delivery and electronic games to educational software. The two key factors driving this growth were the rapid increase in the installed base of CD-ROM readers in personal computers and the exponential increase in the number of titles published on CD-ROM. In 1995, Sony and Philips once again provided a solution by introducing a new electronic media format known as DVD (digital versatile disc), which expanded storage capacity to 4.7GB per disc, up from 640MB on a CD-ROM [1]. The first generation of DVD technology was plagued by a number of problems, including the inability to read CD-R and CD-RW media and slow overall performance, so adoption of DVD technology outside the consumer video market was initially slow. Second-generation technology corrected many of these initial problems, and by early 1998 corporations were beginning to embrace the capacity and multimedia qualities of DVD [3].

The years between 1995 and 2000 saw unprecedented growth in the amount of networked data stored by organizations. This growth was fueled in part by the continued convergence of digital technologies and the escalating use of digital content creation and manipulation tools such as word processors, email, and audio and video editors. In addition, the connectivity provided by LANs and the Internet has increased the need for companies to provide information access on a 24x7 basis. To address these needs, storage companies are providing larger-capacity media such as double-sided DVDs; larger secondary storage devices that provide space for terabytes and petabytes of data; storage area networks (SANs) that allow data and storage management functions to be off-loaded from the primary network; and a wealth of other storage-related software and hardware products, all designed to increase data accessibility while addressing the limitations of IT staffs and resources. While the methods for storing information have changed dramatically over time, one overwhelming theme is clear - organizations need to efficiently manage and store their valuable corporate information.

The last 15 years have seen a tremendous shift toward decentralization as corporations moved away from the traditional mainframe approach toward more distributed client/server applications. However, the need to host rapidly changing mission-critical applications in distributed environments with limited network administration resources is driving a significant change in the network topology of those environments. Operational managers are moving toward the centralization of both servers and storage, where systems can be centrally managed and controlled. The productivity and business losses associated with downtime in mission-critical applications are extremely high and are leading network administrators to change and reengineer the network architecture [1].

Storage Area Networks (SANs). SANs are a new way of attaching storage by externalizing it from the server, taking it off the server bus, and placing it on a specialized storage subnet. Each processor then communicates with the storage subsystem via this storage subnet. This architecture allows data transfer and storage without impacting local area network performance - a key limiting factor of current storage solutions [1].
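
As a rough illustration of that separation, the toy model below (plain Python, with entirely hypothetical class names; it is not a real SAN stack) counts the bytes carried by each network: clients are answered over the LAN, while the bulk block I/O travels only on the dedicated storage subnet.

```python
# Toy model of the SAN idea: block I/O rides a dedicated storage subnet,
# so bulk data transfer never loads the LAN. All names are illustrative.

class Network:
    """Counts bytes carried, so we can see which network does the heavy lifting."""
    def __init__(self, name):
        self.name = name
        self.bytes_carried = 0

    def transfer(self, payload: bytes) -> bytes:
        self.bytes_carried += len(payload)
        return payload


class StorageSubsystem:
    """Shared storage reached only through the storage subnet (the 'SAN')."""
    def __init__(self, san: Network):
        self.san = san
        self.blocks = {i: bytes(4096) for i in range(256)}  # 256 x 4 KB blocks

    def read_block(self, block_no: int) -> bytes:
        return self.san.transfer(self.blocks[block_no])


class Server:
    """Application server: answers clients on the LAN, reads data over the SAN."""
    def __init__(self, lan: Network, storage: StorageSubsystem):
        self.lan = lan
        self.storage = storage

    def handle_request(self, block_no: int) -> bytes:
        data = self.storage.read_block(block_no)  # bulk I/O stays on the SAN
        return self.lan.transfer(data[:64])       # only the small reply hits the LAN


if __name__ == "__main__":
    lan, san = Network("LAN"), Network("SAN")
    server = Server(lan, StorageSubsystem(san))
    for block in range(100):
        server.handle_request(block)
    print(f"LAN carried {lan.bytes_carried} bytes, SAN carried {san.bytes_carried} bytes")
```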

With server clustering, servers are linked via a high-speed connection that helps detect faults and ensures rapid (and transparent) fail-over; storage and other resources are switched over to the fallback server when fail-over occurs [1]. Both models provide network accessibility, data access, and system-management flexibility, allowing for shared storage resources. A minimal sketch of the fault-detection side of this idea appears below.
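
The sketch assumes a simple heartbeat scheme; the class names, thresholds, and resource handling are illustrative and are not taken from any particular clustering product.

```python
# Minimal heartbeat fail-over sketch (assumed design, not a specific product):
# a fallback node monitors heartbeats from the primary over the cluster
# interconnect and takes over shared resources when the primary goes silent.

import time

HEARTBEAT_INTERVAL = 1.0   # seconds between expected heartbeats
MISSED_LIMIT = 3           # missed heartbeats before declaring the primary down


class ClusterNode:
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()
        self.resources = []          # e.g. shared storage volumes, virtual IPs

    def heartbeat(self):
        """Called whenever a heartbeat arrives over the cluster interconnect."""
        self.last_heartbeat = time.monotonic()

    def is_alive(self, now):
        return (now - self.last_heartbeat) < HEARTBEAT_INTERVAL * MISSED_LIMIT


def monitor_and_failover(primary, fallback, shared_resources, now):
    """If the primary has missed too many heartbeats, move resources to the fallback."""
    if not primary.is_alive(now):
        fallback.resources = shared_resources   # take over storage and addresses
        return fallback                          # fallback is now the active node
    return primary


if __name__ == "__main__":
    primary, fallback = ClusterNode("node-a"), ClusterNode("node-b")
    volumes = ["vol1", "vol2"]
    primary.resources = volumes

    # Simulate the primary going silent for longer than the allowed window.
    primary.last_heartbeat -= HEARTBEAT_INTERVAL * (MISSED_LIMIT + 1)
    active = monitor_and_failover(primary, fallback, volumes, time.monotonic())
    print(f"Active node after check: {active.name}")   # node-b takes over
```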

Bibliography

[1] "Emerging Trends in Data Storage on Magnetic Hard Disk Drives." Datatech, ICG Publishing, September 1988, pp. 11-16.

[2] Kozierok, Charles M. "PC Guide Disk Edition." http://www.pcguide.com/disk/index.htm

[3] Matick, R. Computer Storage Systems and Technology. John Wiley & Sons, New York, 1977, pp. 3-27.