Which method of storing information is considered the most reliable, and why? Ensuring the safety of electronic copies recorded on external storage media

How can you ensure the safety of your information? What materials should you use? What should you consider when choosing a storage medium? Don't rush to answer these seemingly simple questions. Start by carefully weighing the advantages and disadvantages of the available storage media. Manufacturers will gladly tell you about the advantages; the pitfalls are what we will dredge up together in this article.

Sometimes a random napkin or an old business card is enough to save a vital piece of information. But such media are hardly suitable for storing a financial report or a video of a recent corporate event. Beyond that, there are vast amounts of information of legal, commercial, historical or scientific value that must be kept for years or even centuries, which makes the choice of storage medium a matter of paramount importance. What should you choose in a dynamic world of technological innovation and old, proven media? Below is an overview of the main storage media, seen from their least flattering side.

Paper

Paper is the oldest means of storing information. As is well known, the spontaneous changes in the properties of paper as it ages stem from changes in its chemical structure, above all in its main component, cellulose. Advances in technology have had a positive impact on the quality of the materials used in production: new manufacturing processes have significantly improved the physical, chemical and electrostatic properties of paper. Scientific progress has also produced more advanced ways of recording information: soot-based inks, lead pencils, fountain pens, printing ink, typewriter ribbons and printer inks.

The method used to record the information, together with the quality of the material itself, ultimately determines how long data survives on paper. Our ancestors wrote with a stylus or with carbon-based ink, a chemically stable substance whose properties do not change for centuries. The text was usually applied by physically deforming the surface, that is, by pressure. Typewriters and dot-matrix printers used the same principle: inorganic dye was transferred by contact, with the paper first pressed and the dye then penetrating the material to a certain depth.

This old method of recording information by mechanical pressure bears no comparison with what today's inkjet and laser printers do. An inkjet printer sprays liquid ink from a distance without physically altering the surface; manufacturers disclose neither the penetration depth of the ink nor its composition. With laser printers the situation is even worse. The technology applies toner powder to the paper, the sheet then passes through rollers heated to a high temperature, and the powder granules are sintered. The toner is often not absorbed into the paper at all: there are cases where, after a few years, it has simply flaked off the sheet in whole pieces, like fragments of an old mosaic.

Photographic film

Things are much better with photographic film than with paper.

Firstly, the production technologies, at least for black-and-white film, are time-tested. They hardly change, so one can say with confidence that the material will survive for a long time even if you buy the most ordinary film from the nearest photo shop. The chances of a long life are, of course, higher for professional films, which differ from amateur ones in containing special additives that slow the aging process. However, the storage requirements for professional films are also somewhat stricter.

Secondly, unlike paper, photographic film has a stated shelf life during which the manufacturer guarantees that its properties will be preserved. After this period, a chemical aging process begins, which can be held back by observing the proper temperature, humidity and light conditions during storage.

A significant drawback of working with photographic film is its relatively high cost, both for the film itself and for the equipment: a still or movie camera, reagents for developing and fixing the image, and projectors for viewing the finished material.

Magnetic tape

You surely remember the old cassette recorder, later joined by video players and VCRs; the storage medium in all of them was the removable cassette. With the development of information technology, magnetic tape came to be used to store information in digital form.

Special devices called tape drives (streamers) record information onto tape digitally, storing it in roughly the same way as a computer does: as files. Tape drives were once widely used for storing backup copies of data, but they never caught on in everyday use, primarily because of how awkward it is to access information recorded on tape. First you must rewind to the spot where the needed data was recorded, then wait while it is read into the computer's memory; not everyone has the patience for such technological hassle. At one time, expansion cards were produced that, together with an ordinary audio or video recorder, allowed data to be stored on audio cassettes and later on video cassettes.

How long information survives on magnetic tape depends largely on the quality of the tape itself. Low-quality tapes exist whose magnetic layer simply crumbles away over time; if you can see noise in a video recording, reading digital data from such a tape will be problematic. Special streamer tape is designed for longer storage and more active use. When recording onto it, special encoding is applied that allows the information to be reliably reconstructed on reading even if some bits are decoded incorrectly (the user will not notice a thing). In addition, several copies of the data can be created simultaneously during recording (several tracks can be written in parallel across the width of the tape), which also extends the storage life.
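The principle behind this redundant encoding can be sketched with a toy example. Real tape drives use far stronger error-correcting codes (Reed-Solomon, for instance); the triple-repetition scheme below is a deliberately simplified illustration, not the actual format used on tape:

```python
def encode(bits):
    """Write each bit three times, the simplest form of redundancy."""
    return [b for b in bits for _ in range(3)]

def decode(coded):
    """Take a majority vote over each group of three repeated bits."""
    out = []
    for i in range(0, len(coded), 3):
        group = coded[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out

data = [1, 0, 1, 1]
coded = encode(data)
coded[4] ^= 1                    # simulate one bit read incorrectly
assert decode(coded) == data     # the reader recovers the data anyway
```

A single misread bit in any group of three is outvoted by the two good copies, which is why the user "will not notice a thing"; real tape formats achieve the same effect with far less overhead.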

The problem potentially awaiting every magnetic-tape enthusiast is the rapid obsolescence of the equipment. There is no guarantee that in a few years, if your current device breaks down, you will be able to find a replacement even just to read the data and transfer it to a new medium. Another unpleasant aspect of working with magnetic tape is that cassettes must be rewound regularly; otherwise the adjacent layers of tape magnetize each other, and the tape can no longer store information reliably for long. Industrial installations use robotic systems that automatically change cassettes as they fill up and periodically rewind the tapes.

Tapes must be stored with great care, since the invisible magnetic fields that surround us can damage the information they hold. The use of ferromagnetic metal shelving, for instance, is not allowed. When tapes are placed on steel racks, the racks must be demagnetized and their circuits closed: the metal parts connected with an electrical wire and properly grounded. It is also worth recalling that magnetic tape, like any medium, requires a certain temperature and humidity regime.

Floppy disks

Floppy disks are a thing of the past, literally. They were popular from the 1970s until the late 1990s, when the larger and more convenient CDs, DVDs and flash drives displaced them. Drives for 3.5-inch floppy disks can still be bought freely, but they are practically never installed in modern computers. The reason for their disappearance is obvious: the small amount of information a floppy disk holds (1.44 megabytes) and its low reliability. Floppy disks are subject to the same storage requirements as magnetic tape.

CD/DVD

Low cost and wide availability are the main advantages of CDs and DVDs. Unfortunately, the information on them is often completely (or partially) lost after two or three years. This happens because the dye layer degrades under exposure to sunlight and ionizing radiation.

In large production runs, discs are sometimes stamped, much as vinyl records are made. Unlike ordinary recordable CDs and DVDs, such pressed discs can last for many years.

Manufacturers claim that, under proper storage conditions, some types of discs (CD-R, DVD-R) can last from 100 to 200 years. In practice, however, these optimistic claims are not borne out.

Hard disk (HDD)

Today this is perhaps the most common device for storing information. Hard drives can be internal (installed inside the computer case) or external (connected with a USB cable). In the latter case the drive is small enough to carry in a jacket pocket and can be connected to almost any computer via a USB port.

Every year the cost per unit of stored information falls. The information is stored on platters coated with a magnetic material and housed in a sealed enclosure. The recording technology is similar to that of magnetic tape, and the device itself resembles a floppy disk; the main difference lies in the materials used. A hard drive also contains, first, electronics that can fail, for example from a power surge, and second, high-precision mechanics. Because the read heads do not touch the platter surface during operation, the surface does not wear and can store information for many years.

Handled carelessly (dropped, or jolted during operation), hard drives are prone to failure: one sharp jolt of a perfectly working drive can be enough to lose all the information recorded on it beyond recovery. Handled with care, drives will serve well for more than ten years of active everyday use. Lately, though, build quality leaves much to be desired, as manufacturers chasing low prices economize on components and materials.

Flash memory, flash drives

Flash drives are storage devices that use electrically erasable non-volatile memory. While magnetic tape, floppy disks and hard drives were invented and widely adopted at the dawn of computing, flash memory became popular relatively recently, thanks to breakthroughs in chip manufacturing technology.

The range spans from expensive large-capacity solid-state drives to budget devices known as flash drives and memory cards. Today these are perhaps the most affordable and convenient media for everyday use. A memory card is a purely electronic device and connects to a computer through a card reader; a flash drive, by contrast, needs no additional hardware to connect.

Manufacturers claim a data-retention reliability of up to ten years. Unlike hard drives, flash drives do not fear jolts or falls from modest heights. They are lightweight and capacious: enough to fit several movies or tens of thousands of documents on a single device.

In everyday use, flash devices fail quite often, for example from static electricity, which damages the delicate electronics. The cause may also lie in poor manufacturing or in design mistakes made in cheap devices, flash drives especially. The latter can fail when the microcontroller breaks down; in that case the information can, in theory, be recovered directly from the memory chip with special equipment. If the chip itself is damaged, the data cannot be restored.

Technology does not stand still, and scientists are already creating storage media that to most people look like something out of science fiction. When choosing a medium, however, you should be guided not only by fashionable trends but also by common sense. If a few portable gigabytes (the size of a standard flash drive) is enough for your data, there is no point buying an expensive hard drive of gigantic capacity just to impress your friends.

You must also take into account the costs both of purchasing the medium itself and of recording the information and maintaining the equipment (as with photographic film, for example). For truly reliable data safety, the best solution is to choose not one but several storage media, so that each can come to the rescue if another is damaged.

Ensuring the safety of archival documents is one of the main tasks of archivists. The physical condition of documents, and the ability to use them for a wide variety of purposes, depends on how well the storage strategy was chosen.

Procedures for ensuring the safety of electronic documents can be divided into three types:

  • ensuring the physical safety of files with electronic documents;
  • providing conditions for reading information in the long term;
  • providing conditions for reproducing electronic documents in so-called human-readable form.

Ensuring the physical safety of files

This aspect of preserving electronic documents is a practically solved problem, for all types of storage. The solution lies not so much in creating optimal storage conditions for the media as in how the electronic documents are physically placed. To keep computer files from being lost, they should be stored in two or more copies on separate media (a working copy and a backup copy). Then, if one medium is lost, a duplicate can quickly be made from the remaining one.
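This working-copy/backup-copy discipline can be sketched in a few lines; the file names and directory layout here are invented for illustration, with temporary directories standing in for physically separate media:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path):
    """Checksum used to prove that a copy matches the original."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

root = Path(tempfile.mkdtemp())          # stands in for real media
original = root / "report.txt"
original.write_text("quarterly financial report")

# Two or more copies on notionally separate media.
working = root / "working_media"; working.mkdir()
backup = root / "backup_media"; backup.mkdir()
shutil.copy2(original, working / "report.txt")
shutil.copy2(original, backup / "report.txt")

# Losing one medium is not fatal as long as another copy verifies.
reference = sha256(original)
for copy in (working / "report.txt", backup / "report.txt"):
    assert sha256(copy) == reference
```

The checksum comparison is what turns a pile of copies into a verified backup: a copy that no longer matches the reference digest is known to be damaged and can be re-duplicated from a surviving medium.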

Widespread practice shows that working copies of electronic documents are usually kept on an organization's hard drives or servers, while backup copies may be created on a backup server or RAID array, tape drives, magneto-optical disks, or optical discs (CD-RW, DVD-RW). Very few owners of electronic information resources separate out the archival part and store it exclusively on external media. This is natural: the growth in the volume of stored resources lags behind the fall in hard drive prices, which lets organizations expand their server capacity with a comfortable margin.

The choice of media type, and its durability, is also important. This choice depends on:

  • type of stored electronic documents and their total volume,
  • the expected period of storage of documents and provision of access to them,
  • the nature of the production of the media themselves and the expected modes of their storage,
  • requirements for ensuring the authenticity of documents.

For example, large and complexly structured information resources (integrated databases, geographic and multimedia systems, design and construction documentation, page layouts of printed publications) are best stored on high-capacity media so as not to break up the integrity of the documents.

For storing electronic documents for up to 5 years, any modern storage medium (including magnetic floppy disks) is reliable enough. The main things are to pay attention to the reputation of the manufacturer and the country of origin, which ultimately determine the cost of the medium, and to observe the minimum storage requirements. As with any product, the rule applies: cheap is never good. For the same reason, when organizing long-term storage of electronic documents you should, for example, choose optical disc blanks whose retail price is not below 22-25 rubles.

Optical compact discs (CDs) are undemanding in storage and reasonably reliable for 10-15 years. More is not required: after that period you will inevitably have to either rewrite the files onto another type of medium (since it will become impossible to read the information from CD) or convert the electronic documents into other formats and likewise rewrite them onto modern, higher-capacity media.

Optical discs are considered the most durable storage media: some manufacturers put the shelf life of their products at nearly 200 years. Only practice can show how justified this is, and practice is extremely contradictory. On the one hand, there is evidence of CDs being used successfully for 10-15 years; on the other, reports regularly appear of failures to read information from them. Complaints about access to files recorded on CD-R have been especially numerous in recent years. Analysts still find it difficult to give an exhaustive explanation of the possible causes: whether the read failures stem from flaws in CD-R technology itself or from other factors, such as violations of technology in manufacturing the blanks, improper storage conditions and regimes, or incompatibility between the devices used for recording and reading.

Particular attention should be paid to the choice of media type if the electronic documents may be used as written or judicial evidence. If giving documents legal force with an electronic digital signature (EDS) is unrealistic, they should be copied in good time onto CD-R, an optical disc that is written once.

Creating several copies of files does not exhaust the work of ensuring their safety. To minimize the cost of maintaining these copies, you need to create optimal storage conditions for the media.

The specific storage conditions and regimes are largely determined by the type of electronic media. For example, long-term storage of magnetic media requires special equipment to shield them from the magnetic and electromagnetic influences of the environment, or they must be placed away from powerful sources of electromagnetic fields: electric motors, heaters, elevator equipment and the like. Cassettes (reels) of magnetic tape must be rewound every 1.5 years to relieve static charge and prevent the so-called print-through (copy) effect. Common rules for storing any electronic media are to keep them upright and to protect them from mechanical damage and deformation, from dirt and dust, and from extreme temperatures and direct sunlight.

Observing the temperature and humidity regime is very important for electronic media. The general recommendation is: the lower the temperature and relative humidity at which a medium is constantly kept, the longer it retains its qualities. For example, storing polyester magnetic tape at 50% relative humidity and +11 °C preserves its properties for 50 years (ISO 18923). By rough estimates, the same period for CD-R optical discs is ensured by storage at 50% relative humidity and +10 °C (ISO 18927), and for WORM disks at 50% relative humidity and +3 °C (ISO 18925).
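The figures just cited can be collected into a small reference table. This is merely a transcription of the ISO values quoted in the text into code, not an authoritative dataset:

```python
# (relative humidity %, temperature °C, source standard) for a
# projected 50-year retention period, as quoted in the text above.
STORAGE_CONDITIONS = {
    "polyester magnetic tape": (50, 11, "ISO 18923"),
    "CD-R": (50, 10, "ISO 18927"),
    "WORM disk": (50, 3, "ISO 18925"),
}

def describe(medium):
    """Render the recommended regime for one medium as a string."""
    rh, temp, std = STORAGE_CONDITIONS[medium]
    return f"{medium}: {rh}% RH at +{temp} °C ({std})"

for medium in STORAGE_CONDITIONS:
    print(describe(medium))
```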



As we can see, low temperatures help preserve electronic information, but they are thoroughly uncomfortable for long-term human work. Note also that if a medium must be taken out of storage for use in an ordinary office environment, it will need to acclimatize; otherwise, read errors and structural damage to the medium itself are very likely. Acclimatizing an optical disc from the temperature above to +23-25 °C takes at least 3 hours (preferably a day). The acclimatization time for magnetic tape depends on its width: the wider the tape, the longer it needs. Bear in mind, too, that tapes reach temperature equilibrium faster than moisture equilibrium. For half-inch tapes, for example, a 5 °C change in temperature should take at least half an hour, while a 10% change in relative humidity should take at least 4 days.

Therefore, when choosing storage regimes for electronic media you should weigh many factors, comparing the intensity of use of the media and the cost of maintaining the storage regime (which can be very significant) against the cost of regularly copying documents onto fresh media. As noted above, when organizing long-term storage of electronic documents, a 10-year lifetime for the media they are recorded on is quite acceptable. In that case, "office" storage regimes suffice: for magnetic tape, +23 °C (ISO 18923); for optical discs, +25 °C (ISO 18927), both at 50% relative humidity. The "Basic Rules for the Operation of State Archives" set the following temperature and humidity regime for archive storerooms: temperature +17-19 °C, relative humidity 50-55%. Under these conditions, CD-R discs can be expected to last up to 20 years.

Solving problems associated with hardware and software obsolescence

While the problems of the physical preservation of files are now being solved quite successfully, other aspects of the long-term storage of electronic documents still await methodological grounding and a technological breakthrough. The emerging problems stem from the rapid change and obsolescence of computer hardware and software.

Over time, the devices used to read information from external media wear out and become obsolete.

For example, 5-inch magnetic floppy disks disappeared, and soon afterwards computers stopped being equipped with the drives and drivers to read them. A similar fate awaits 3.5-inch floppy disks in the near future: many modern PC models already ship without drives for them. Devices for reading optical discs will also likely change over time.

The approximate life cycle of such technologies is 10-15 years, after which they are rapidly forced out of production. These technological shifts must be taken into account when organizing long-term storage of electronic documents, and it is advisable to copy documents onto the latest types of media every 10-15 years. The question of whether magnetic tapes or optical discs will retain their qualities after 50 years of storage then becomes less pressing: archives have manufacturer guarantees sufficient for the next 15-20 years.

The reproduction of electronic documents depends primarily on the software used:

  • operating system,
  • database management systems (DBMS),
  • text editors and word processors (Word, WordPad),
  • graphics viewers (ACDSee) and web browsers (Internet Explorer, Opera, Firefox),
  • specialized design (AutoCAD, ArchInfo) and geo-applications (MapInfo),
  • programs specifically designed to work with specific databases.

For the bulk of office and financial electronic documents with short retention periods, the dependence on changing software is not significant: the software life cycle is estimated at 5-7 years. Moreover, many modern electronic records-management and corporate electronic-archive systems (for example, those based on such well-known document-management platforms as DOCUMENTUM or DocsOpen) are equipped with the necessary format converters. In the short term, such converters are sufficient for accessing and reproducing most text, graphic and video documents (though not databases or complex design and multimedia systems).

When long-term storage of electronic documents is being organized, a change of software platform can lead to the complete loss of documents because it becomes impossible to view them. There are several approaches to this problem:

    Migration: the timely transfer of databases and other electronic documents to a modern technology platform, most often into the formats the organization uses for the operational management of its information resources (so-called "custom formats"). This is a difficult and expensive path; simple converters are usually not enough, and the biggest problems arise with databases. Migration is typically used to provide access to operational and archival information resources that matter for the organization's activities and are in constant use. In state archives it is rational to use this method to organize rapid access to the most important or most frequently used archival electronic resources.

    When organizing long-term storage of databases and other electronic documents, it is desirable to migrate them in advance (before transfer to the archive) into "open" or "archival" (insurance) formats: for text documents txt, rtf, pdf; for graphics tiff, jpg; for tables and databases txt, xls, db, dbf. The point of such preparation for archival storage is that, if needed, it is easier to convert documents from the insurance formats into the formats of current information systems.

    Sometimes migrating information resources to other platforms seems unrealistic for one reason or another, or would significantly distort the original electronic documents. This applies above all to complex structured and multi-format resources: documents from computer-aided design (CAD) systems and geographic information systems, multimedia products and so on. In such situations you can use emulators of the original software environment, though this can be difficult, since emulators may not have been developed for every software shell. That is why, when developing information systems, you should from the outset rely not only on common storage formats but also on common operating systems, DBMSs and other software; then it may be easier to find the necessary emulators, which may be developed and brought to market by the software manufacturers themselves. For example, the operating systems MS Windows 95, 98, NT, 2000 and XP include an emulator of MS DOS. Since these operating systems are widely used, one may hope that Microsoft will continue to support emulators of its older systems.

    Encapsulation: the inclusion of electronic documents in files of cross-platform formats, for example XML. American archivists currently regard this method as the best option for the exchange and long-term storage of electronic documents, although it can hardly be considered a panacea.

    It should be noted that research into the use of emulation and encapsulation for the long-term storage of electronic documents is still sporadic. Even if workable methods are proposed soon, testing them will take considerable time. For now, migration remains the only proven method of long-term storage of electronic documents.
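What encapsulation means in practice can be sketched as follows. The element names and metadata fields below are invented for the example, not taken from any archival standard; the idea is only that document bytes plus descriptive metadata travel together in one self-describing, cross-platform XML file:

```python
import base64
import xml.etree.ElementTree as ET

def encapsulate(filename, payload, metadata):
    """Wrap a document's bytes and its metadata in one XML file."""
    root = ET.Element("encapsulatedDocument")
    meta = ET.SubElement(root, "metadata")
    for key, value in metadata.items():
        ET.SubElement(meta, key).text = value
    content = ET.SubElement(
        root, "content", {"filename": filename, "encoding": "base64"})
    content.text = base64.b64encode(payload).decode("ascii")
    return ET.tostring(root, encoding="unicode")

doc = encapsulate(
    "report.doc",
    b"\xd0\xcf\x11\xe0 binary document bytes",
    {"title": "Annual report", "created": "2004-03-01"},
)

# Any XML-aware tool on any platform can parse the wrapper and
# restore the payload byte for byte.
restored = base64.b64decode(ET.fromstring(doc).find("content").text)
assert restored == b"\xd0\xcf\x11\xe0 binary document bytes"
```

Because the wrapper is plain text, it does not depend on any particular operating system or application surviving, which is precisely the appeal of the method to archivists.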

Ensuring the authenticity of electronic documents

The problems of ensuring the authenticity of electronic documents are closely tied to the methods used to exchange them and to ensure their long-term storage.

Until now, the main means of authenticating electronic documents have been the audit logs of network resources. With their help one can trace the history of a document and identify cases of unauthorized access to it. The weak point of such an authentication system, however, is the logs themselves, which lie in the practically uncontrolled power of network administrators.

Another challenge is ensuring authenticity in the inter-network (inter-organizational) space. Without a clear picture of the origin of electronic documents and firm guarantees of their integrity, courts refuse to recognize their evidentiary value or to accept them as written evidence. The exchange of electronic documents takes place on a basis of trust (for example, by email), and their accuracy is guaranteed only by the authority of the owner of the information resource or email address. At one time it was precisely these unresolved issues of authenticity and integrity that blocked the implementation of the "paperless office".

Since the mid-1990s there has been noticeable progress in the authentication of electronic data, both technological and legal. Electronic means have appeared for protecting the integrity of data and tying it to a specific individual: so-called digital (electronic, electronic digital) signatures and seals, electronic watermarks, file checksums and the like.

The entire set of digital signatures can be roughly reduced to two classes:

  1. using human biometric parameters - fingerprints, voice timbre, iris, etc.;
  2. using cryptographic methods. The latter class is called the "electronic digital signature" (EDS); it is the EDS that is considered the most reliable means of authentication in the inter-corporate electronic space.

Legally, the EDS was long used only in the private-law sphere. To apply it, the parties had to conclude bilateral or multilateral agreements (on paper) spelling out every nuance of generating, verifying and storing digital signatures, and the responsibilities of the parties. The turn of the century became a period of mass legal recognition of electronic means of authentication in open information networks: laws on the EDS or on electronic documents have been adopted in most developed and many developing countries.

Legal recognition turns the digital signature into a reliable means of ensuring the authenticity and integrity of electronic documents, but only of those in operational use, with a retention period of five or at most ten years. The EDS is not suited to authenticating documents for decades. To understand why, a few words are needed about the cryptographic technologies for authentication and information protection that the law defines as an "analogue of a handwritten signature".

The Russian law on the digital signature helps reveal the essence of this technology. It defines the EDS as "a requisite of an electronic document intended to protect this electronic document from forgery, obtained as a result of cryptographic transformation of information using the private key of an electronic digital signature, and allowing the owner of the signature key certificate to be identified and the absence of distortion of information in the electronic document to be established" (Article 3).

An electronic signature looks like a sequence of digits and other symbols, which indeed allows us to speak of it as a requisite separate from the other attributes of an electronic document. Technologically, an EDS is produced when a cryptographic protection system executes a so-called asymmetric encryption algorithm: encryption with a key (again a sequence of numbers) that differs from the key later used to decrypt the message. The first key is called the private (secret, personal) key; it may be held only by the person on whose behalf the document is signed. The second key is public: anyone who needs to verify the authenticity of the signature may learn its value. The two keys form an interconnected pair, yet the private key cannot be computed from the public key in any foreseeable time. Using the public key for verification therefore securely links the signed document to the owner of the private key.
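The key asymmetry can be demonstrated with textbook RSA on toy numbers. Real EDS systems use keys hundreds of digits long and padded, standardized schemes; the tiny primes below exist only to make the arithmetic visible:

```python
p, q = 61, 53                 # secret primes (toy-sized)
n = p * q                     # 3233, published as part of the key
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, known to everyone
d = pow(e, -1, phi)           # 2753, the private exponent

def sign(digest):
    """Signer transforms the document digest with the private key."""
    return pow(digest, d, n)

def verify(digest, signature):
    """Anyone can invert the transform with the public key (e, n)."""
    return pow(signature, e, n) == digest

digest = 1234                     # stand-in for a document hash code
sig = sign(digest)
assert verify(digest, sig)            # genuine signature checks out
assert not verify(digest + 1, sig)    # altered document is rejected
```

Knowing e and n, an attacker would have to factor n to recover d; with toy numbers that is trivial, but with keys of realistic length it is infeasible in any foreseeable time, which is exactly the property the text describes.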

At the same time, the feature of an electronic signature that distinguishes it from a handwritten one is that it identifies not so much the person who signed the electronic document as the specific document itself: two different documents signed with the same private key will have different signature values. This is because, besides the private key, the signature calculation algorithm also takes other parameters as input, above all the so-called hash code of the file(s) containing the electronic document.

Information hashing algorithms are implemented using hash functions, which in cryptography are classified as one-way: easy to compute but very hard to invert. With a good hash function, the probability of obtaining the same hash code for two different files is negligible. It is the hash code of the electronic document that guarantees its integrity: after the document is signed, it is easy to determine whether any changes have been made to it. Hash functions are also convenient for computing digital signatures because they convert digital sequences (files) of arbitrary length into sequences (hash codes) of a fixed length, which saves the computing resources of user machines.
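
The properties just described - a fixed-length output and sensitivity to any change in the input - are easy to demonstrate with any standard hash function. The sketch below uses SHA-256 purely for illustration (the Russian standard would be GOST 34.11, which is not in Python's standard library):

```python
import hashlib

# Two "documents" differing by a single character
doc = b"Annual financial report, version 1"
tampered = b"Annual financial report, version 2"

h1 = hashlib.sha256(doc).hexdigest()
h2 = hashlib.sha256(tampered).hexdigest()

print(len(h1))   # 64 hex characters (256 bits), regardless of input size
print(h1 == h2)  # False: any change yields a completely different hash
```

The same 64-character digest would come out of a multi-gigabyte file, which is exactly why signing the hash instead of the document itself saves computing resources.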

The idea of asymmetric encryption was put forward in 1976 by the American cryptographers W. Diffie and M. Hellman. Around the same time RSA appeared, a public-key encryption algorithm that is still widely used today. In Russia, GOST 34.10 for generating and verifying digital signatures and GOST 34.11 for hashing information were published in 1994. Since July 1, 2002, the new GOST 34.10-2001 has been in force, which doubled the length of the signature key (to 1024 bits). Most digital signature tools on the Russian market are based on these standards.
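
The asymmetric scheme behind RSA can be sketched in a few lines. The numbers below are toy values (two-digit primes) chosen only to make the arithmetic visible; real signature keys are hundreds of digits long, and production systems use vetted cryptographic libraries, not hand-rolled code like this:

```python
import hashlib

# Toy RSA key pair (tiny primes for illustration only; real keys are 1024+ bits)
p, q = 61, 53
n = p * q        # modulus, part of both keys
e = 17           # public exponent
d = 2753         # private exponent: e * d = 1 (mod lcm(p-1, q-1))

def sign(document: bytes) -> int:
    """Hash the document, then 'encrypt' the hash with the private key."""
    h = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
    return pow(h, d, n)

def verify(document: bytes, signature: int) -> bool:
    """Recover the hash with the public key and compare it to a fresh hash."""
    h = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
    return pow(signature, e, n) == h
```

Because the document is hashed before signing, two different documents signed with the same private key produce different signature values, exactly as described above; and any party holding only the public pair (e, n) can check the signature without being able to forge one.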

There are different technologies for applying a digital signature to an electronic document. Some add the hash code, the signature and other associated details (for example, a signature timestamp) directly into the file with the document; others place this information in files related to the document. Largely for this reason, a digital signature generated in one cryptographic protection system cannot be verified in another, even if both are based on the same encryption algorithms. In addition, Russian digital signature tools - "Verba", "Krypton", "Crypto-Pro", "Corvette", "LAN Crypto" - often implement different authentication protocols (rules), which does not help their compatibility either. Thus, it is better to verify the authenticity of a signature with the same digital signature tool that generated it.

It should also be noted that confirming the authenticity of a digital signature is a technologically short-lived process. It depends on the life cycle of the digital signature tool - a specific cryptographic data protection system. In particular, authentication of an electronic document becomes impossible after the technological platform changes, or pointless after the certificate of the digital signature tool loses its legal force. This puts the authenticity of previously signed documents in question.

The durability of a digital signature is also an important question; it depends primarily on the length of the public signature key. In the mid-1970s it was believed that factoring a 125-digit number would take tens of quadrillions of years. Yet just two decades later, using several thousand computers connected via the Internet, a 129-digit number was factored. This became possible thanks both to new methods for factoring large numbers and to the increased performance of computers and their integration into global networks. Currently, when the strength of signature generation and verification algorithms is assessed, the period of liability for basic banking operations is taken into account, and it does not exceed five years. For example, the first GOST R 34.10-94 used a 512-bit algorithm; GOST R 34.10-2001 already uses a 1024-bit one. According to experts, this GOST will remain resistant to attack for only the next 5-6 years. That is, in 10-15 years no one will be able to guarantee that a digital signature generated under this GOST was not forged a week ago.

But the main problem when authenticating electronic documents signed with an electronic digital signature is that this detail (as well as the value of a separate hash code or checksum that guarantees the integrity of the document) is inextricably linked to the document format. When an electronic document is reformatted (which is inevitable during long-term storage), checking the authenticity of the digital signature becomes meaningless.

The most acceptable method of ensuring the authenticity of electronic documents during long-term storage (especially those certified with digital signatures) might be the use of emulators or converters when reproducing them. But this practice has so far been little studied. The problems here lie both in the limited set of such software tools and in the reproduction errors that may occur during emulation or conversion, which again diminishes the evidentiary value of electronic documents in long-term storage. Encapsulation is probably the most promising method: American archivists see it as a way to solve the problem of the authenticity of electronic documents, but it requires long-term testing and further development.

The need to reformat electronic documents during long-term storage leads to the fact that, in essence, another document appears with changed details and control characteristics: date of last storage, volume, checksum, hash code, digital signature, etc. It turns out that the original electronic document will be impossible to read and use, and its migration copy will not have legal force.

The noted problem - ensuring the authenticity of electronic documents over the long term - is perhaps the most acute and complex today. There are as yet no clear recommendations on how to solve it, either in Russia or abroad. For now only one solution suggests itself: at the records-management stage, documents that imply a long storage period and serious liability of the parties should not be created and stored exclusively in electronic form. It is advisable to create and store such official documents on paper as well.

While the technological problems of authenticating electronic information remain unsolved, the tried-and-true method comes to the fore: authenticating electronic documents transferred to the archive on external media by means of paper documents drawn up in accordance with GOST 6.10.4-84 and GOST 28388-89. These GOSTs have long been technologically and conceptually outdated, and many of their provisions are simply infeasible in practice. Nevertheless, they are still in force and contain a rational core that can be used when developing the form of the certification document. Such a document (certification sheet, cover letter, acceptance and transfer certificate, etc.) must include the identifying characteristics of the files and electronic media and be certified by the signatures of officials and a seal.

Recipe for success

Thus, an analysis of the nature of electronic documents allows us to determine several conditions, the fulfillment of which ensures their safety and the possibility of use for decades:

  1. The archive must accept and store “information objects” (files), including mainly content and contextual information (data). Acceptance for storage of information resources complete with executable programs (shells of applied information systems) may, over time, cause legal and technological problems in their use. Acceptance of computer programs is necessary in exceptional cases, when without this it is impossible to reproduce electronic documents accepted for storage.
  2. In the short term (5-10 years), the safety of documents is ensured by creating backup and working copies of electronic documents on separate media.
  3. In the long term (more than 10 years), it is necessary to migrate documents into so-called software-independent formats (insurance formats), and in such a way that in the future the resulting generation of documents can be recognized as originals.
  4. Electronic documents in insurance formats can be very inconvenient to use and can significantly slow down user access to archived information. Efficient access to archival electronic documents can be ensured by accepting, storing and/or promptly converting them into the formats of the organization's or archive's current information system - user formats. The migration procedure to user formats should likewise be designed so that the resulting documents can be recognized as originals. This measure is necessary because it is difficult to determine in advance which of the formats (insurance, user, or those in which documents were accepted for storage) may become the basis for creating migration insurance copies of subsequent generations.
  5. When ensuring the safety of electronic documents, great attention should also be paid to information security issues: ensuring their authenticity, protection from malicious computer programs (viruses) and from unauthorized access.

Read the continuation of the article in the next issue. Issues of organizing accounting and describing electronic documents during their long-term storage will be considered.

1 See, for example: In a couple of years, information from CD-R will disappear (http://www.rambler.ru/db/news/msg.html?mid=4528814&s=5).

2 See: ISO 18923, 18925, 18933.

3 See: ISO 18923:2000. Imaging Materials. Polyester-Base Magnetic Tape. Storage Practices; ISO 18927:2002. Imaging Materials. Recordable Compact Disc Systems. Method for Estimating the Life Expectancy Based on the Effects of Temperature and Relative Humidity; ISO 18925:2002. Imaging Materials. Optical Disc Media. Storage Practices.

4 See: INFORMATION MANAGEMENT. Challenges in Managing and Preserving Electronic Records. GAO. United States General Accounting Office. Report to Congressional Requesters. June 2002. GAO-02-586.

5 See: Anin B.Yu. Protection of computer information. St. Petersburg, 2000. P. 121.

6 GOST 6.10.4-84. Giving legal force to documents on machine-readable media and machinograms created by computer technology. Basic provisions. M., 1985; GOST 28388-89. Information processing systems. Documents on magnetic storage media. Order of execution and handling. M., 1990.


A memory device is a storage medium intended for recording and storing data. Its operation can be based on any physical effect that puts a system into two or more stable states.

Information storage devices are divided into two types:

    external (peripheral) devices

    internal devices

External devices include magnetic disks, CDs, DVDs, BDs, streamers, hard drives, and flash cards. External memory is cheaper than internal memory, which is usually built on semiconductors. In addition, most external memory devices can be moved from one computer to another. Their main drawback is that they are slower than internal memory devices.

Internal devices include RAM, cache memory, CMOS memory, and BIOS. Their main advantage is the speed of information processing; at the same time, internal memory devices are quite expensive.

FDD (floppy disk drive)

The use of floppy disks is becoming a thing of the past. Drives come in two types, storing information on floppy disks in one of two formats: 5.25" or 3.5". 5.25" floppy disks (maximum capacity 1.2 MB) are now practically never encountered. For 3.5" floppy disks the maximum capacity is 2.88 MB, while the most common format is 1.44 MB.

Flexible magnetic disks are enclosed in a plastic case. In the center of the floppy disk there is a hub for gripping and rotating the disk inside the case. The floppy disk is inserted into the drive, which rotates it at a constant angular speed. All floppy disks are formatted before use: service information is written to them, and both surfaces are divided into concentric circles - tracks - which in turn are divided into sectors. The sectors of the same name on both surfaces form clusters. The magnetic heads sit adjacent to both surfaces, and as the disk rotates they pass over all the clusters of a track. Moving the heads along the radius with a stepper motor provides access to each track. Writing and reading are carried out in whole clusters, usually under the control of the operating system; in special cases, however, one can read and write bypassing the operating system, using BIOS functions directly. To preserve information, flexible magnetic disks must be protected from strong magnetic fields and heat, since such exposure can demagnetize the medium and destroy the information.
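
The formatted capacity quoted above follows directly from the disk geometry. A quick sketch with the standard 3.5" high-density parameters:

```python
# Geometry of a standard 3.5" high-density floppy disk
sides = 2                 # both surfaces carry tracks
tracks_per_side = 80      # concentric circles
sectors_per_track = 18
bytes_per_sector = 512

capacity = sides * tracks_per_side * sectors_per_track * bytes_per_sector
print(capacity)           # 1474560 bytes
```

That is 1,474,560 bytes, i.e. 1440 KB, which marketing rounded into the familiar "1.44 MB" (mixing decimal and binary units).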

HDD (hard disk drive)

A hard disk drive is one of the most advanced and complex devices in a modern PC. Its drives are capable of holding many megabytes of information transferred at enormous speeds. The basic principles of a hard drive have changed little since its inception. When you look at a hard drive, you will see only a durable metal casing. It is completely sealed and protects the drive from dust particles. In addition, the case shields the drive from electromagnetic interference.

The disk is a round platter with a very smooth surface, usually made of aluminum, less often of ceramics or glass, coated with a thin ferromagnetic layer. Magnetic heads read and write information to the disks. Digital information is converted into an alternating electric current supplied to the magnetic head and then transferred to the magnetic disk in the form of a magnetic field, which the disk can perceive and "remember". Under the influence of the external magnetic field, the domains' own magnetic fields align with its direction. After the external field is removed, zones of residual magnetization remain on the surface of the disk; in this way the recorded information is preserved. As the disk rotates past the gap of the magnetic head, the zones of residual magnetization induce in it an electromotive force that varies with the magnitude of the magnetization. The disk pack, mounted on the spindle axis, is driven by a special motor compactly located beneath it. The rotation speed of the disks is usually 7200 rpm. To shorten the time the drive takes to become operational, the motor runs in forced mode for a while after power-on, so the computer's power supply must have a reserve of peak power. The appearance in 1999 of IBM's heads exploiting the giant magnetoresistance effect (GMR, Giant Magnetoresistance) raised the recording density to 6.4 GB per platter in products already on the market.

Basic hard drive parameters:

    Capacity – the hard drive has a capacity from 40 GB to 200 GB.

    Data reading speed. Today's average is about 8 MB/s.

    Average access time. It is measured in milliseconds and indicates the time it takes for the disk to access any area you select. The average is 9 ms.

    Disk rotation speed. An indicator directly related to access speed and data reading speed. The rotation speed of the hard drive mainly affects the reduction in average access (search) time. The overall performance improvement is especially noticeable when retrieving a large number of files.

    The size of cache memory is a small, fast buffer memory in which the computer places the most frequently used data. The hard drive has its own cache memory up to 8 MB in size.

    Manufacturer. Only the largest manufacturers can master modern technologies, because organizing the production of complex heads, platters, and controllers requires large financial and intellectual outlays. Currently seven companies produce hard drives: Fujitsu, IBM-Hitachi, Maxtor, Samsung, Seagate, Toshiba and Western Digital. Each model from a given manufacturer has its own unique features.
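
The link between rotation speed and average access time noted above can be quantified: when a request arrives, the needed sector is, on average, half a revolution away from the head. A small sketch:

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency: half a revolution, expressed in ms."""
    ms_per_revolution = 60_000 / rpm   # 60,000 ms in a minute
    return ms_per_revolution / 2

print(round(avg_rotational_latency_ms(7200), 2))  # 4.17 ms
print(round(avg_rotational_latency_ms(5400), 2))  # 5.56 ms
```

This is why a 7200 rpm drive shaves roughly 1.4 ms off the average access time compared to a 5400 rpm one; the rest of the quoted ~9 ms average is dominated by head seek time.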

Streamers

The classic method of backup is the use of streamers - devices that record on magnetic tape. However, the capabilities of this technology, in both capacity and speed, are severely limited by the physical properties of the medium. The principle of operation of a streamer is very similar to that of a cassette recorder: data is recorded on a magnetic tape pulled past the heads. The drawback of the tape drive is that sequential access makes reading data very slow. A streamer's capacity reaches several GB, which is less than that of modern hard drives, while the access time is many times longer.

Flash card

These devices, made on a single chip and having no moving parts, are based on electrically reprogrammable flash memory chips. The physical principle of organizing flash memory cells can be considered the same for all manufactured devices, whatever they are called. Such devices differ in the interface and controller used, which accounts for the differences in capacity, data transfer speed and power consumption.

Multimedia Card (MMC) and Secure Digital (SD) - disappearing from the scene due to limited capacity (64 MB and 256 MB, respectively) and low speed.

SmartMedia - the main format for general-purpose cards (from bank and metro passes to identity cards). These thin plates weighing 2 grams have exposed contacts, but their significant capacity (up to 128 MB) and data transfer speed (up to 600 KB/s) for such dimensions have led to their adoption in digital photography and portable media players.

Memory Stick - an "exclusive" format from Sony, practically unused by other companies. The maximum capacity is 256 MB, the data transfer speed reaches 410 KB/s, and prices are relatively high.

CompactFlash (CF) - the most common, universal and promising format. It connects easily to any laptop. Its main field of application is digital photography. In capacity (up to 3 GB) today's CF cards are not inferior to the IBM Microdrive, but they lag behind in data exchange speed (about 2 MB/s).

USB Flash Drive - a drive with a serial USB interface (12 Mbit/s bandwidth) or its modern version USB 2.0 (up to 480 Mbit/s). The medium itself is enclosed in a streamlined compact body reminiscent of a car key fob. The main parameters (capacity and operating speed) are the same as for CompactFlash, since the memory chips themselves are the same. It can serve not only as a "transporter" of files but also work as a regular drive: you can launch applications from it, play music and compressed video, and edit and create files. The average data access time on a flash disk is low - less than 2.5 ms. It is likely that USB Flash Drive class devices, especially those with a USB 2.0 interface, will be able to completely replace ordinary floppy disks and partially replace rewritable CDs, Iomega ZIP media and the like.
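
The practical gap between the two interface versions is easy to estimate. The sketch below computes an idealized transfer time from the nominal bandwidth, deliberately ignoring protocol overhead (real throughput is noticeably lower than the nominal figure):

```python
def transfer_seconds(size_mb: float, bandwidth_mbit_s: float) -> float:
    """Idealized transfer time: size in megabytes, bandwidth in Mbit/s."""
    return size_mb * 8 / bandwidth_mbit_s  # 8 bits per byte

# A CD-sized 700 MB archive over USB 1.1 vs. USB 2.0
print(round(transfer_seconds(700, 12), 1))    # roughly 7.8 minutes
print(round(transfer_seconds(700, 480), 1))   # well under a minute
```

The nominal 40x speedup of USB 2.0 is what makes flash drives a realistic replacement for floppies and rewritable CDs.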

PC Card (PCMCIA ATA)– the main type of flash memory for compact computers. There are currently four PC Card formats: Type I, Type II, Type III and CardBus, differing in size, connectors and operating voltage. For PC Cards, backward compatibility is possible across connectors “from top to bottom”. The capacity of the PC Card reaches 4 GB, the speed is 20 MB/s when exchanging data with a hard drive.

For the normal operation of any business, prompt access to information and its reliable storage are vital. Technical problems, update errors, cyber attacks and other force majeure factors can lead to data loss and hence to financial losses, up to the complete collapse of the company.

We have already written about the deplorable examples of large companies and the importance of backup in the article about 3 strategies

Every day it becomes clearer that ensuring backup of information on the server is the number one need for any business. And the good news is that restoring the entire archive of events, documents and programs is possible with the right choice of backup methods.

In the event of an emergency failure, it is a backup copy of all data that allows for full operational access to all information stored on damaged media.

To protect information on digital media, two different approaches to copying and storage are used - backup and data redundancy. They differ, but can sometimes be applied together.

Data redundancy allows files to be recovered immediately after a failure. The principle is that if access to a file is lost, it is replaced with its copy. This helps avoid downtime of a site or application and allows the server administrator to return the system to its original working state.

It would seem to be the optimal solution, but it has a number of significant drawbacks. If system failures occur on the entire server, all data may be lost. In addition, every operation in the system affects the saved copy. Thus, in the event of malicious operations on the system, errors will remain in all subsequent copies of the data.

In the case of backup, the data is returned to its original state, and it can be restored for any period of time, depending on the depth of the backup.

Backing up critical information allows you to redeploy, restore, or access that information even after an application failure, the failure of an entire machine, or the loss of individual data. The disadvantage of backup, in contrast to redundancy, is that restoring information takes time, during which the equipment is idle. But the data is stored accurately, and access to it is guaranteed with the parameters, and from the moment in time, that the user needs.

The ideal option for storing valuable information is automatic backup to a remote server, which does not depend on external influences and is regularly moderated by administrators. At SmileServer, in every tariff we offer backup and storage of our clients’ data on servers in Germany, which ensures their safety and security in case of any technological failures.

Backup strategy on the server

The optimal strategy for ensuring data safety and uninterrupted operation of user resources is a combination of backup and data redundancy technologies. If one host fails, the machine will continue to operate without failures, since the migration mechanism will work, and thanks to backup technology, all files will be restored from the hard drive.

There are a number of commands you can use to set up backups manually, such as cp and rsync. But to automate the copying process, this approach requires the creation of separate scripts, which is difficult and not always effective. For business tasks, backup is carried out using special tools and utilities, such as BackupPC, Bacula and Duplicity, which we suggest taking a closer look at.
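
As an illustration of the script-based approach mentioned above, here is a minimal sketch of a timestamped copy in Python (the directory names are hypothetical; a real setup would add copy rotation, logging and error handling, which is exactly why tools like BackupPC and Bacula exist):

```python
import shutil
import time
from pathlib import Path

def make_backup(source: str, backup_root: str) -> Path:
    """Copy the source directory into a new timestamped folder under backup_root."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = Path(backup_root) / f"backup-{stamp}"
    shutil.copytree(source, target)   # raises if the target already exists
    return target
```

Scheduled via cron (or Windows Task Scheduler), a function like this gives the "restore to any point in time" depth that distinguishes backup from plain redundancy, because every run produces a separate dated copy.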

Automated backup solutions

Special comprehensive backup solutions make the procedure easy and do not require active participation and multi-level configuration from administrators.

BackupPC

The solution is available for both Windows and Linux and is installed on a dedicated server or VPS that acts as a backup server. This server then downloads the user files. All necessary packages are installed on one server, and you only need to configure disk access via protocol or SSH. On Smile Server virtual servers, you can implement BackupPC SSH keys during deployment without the use of additional software.

Bacula

A universal and technically sophisticated host backup program based on the client-server model. In it, each backup task is installed as a separate job (Job). This approach allows you to fine-tune, connect multiple clients to one storage, change copy schemes and expand functions using additional modules.

Duplicity

It is a true alternative to all existing backup tools. The main difference of this software solution is the use of GPG encryption when storing information, which increases the security of data storage.

The main advantage of using GPG encryption for backup is that the data is not stored in plain text. Only the owner of the encryption key can have access to them.

Block backup

This type of backup is also called “imaging”. The technology allows you to copy and restore data from entire devices. If during standard backup copies of individual files are formed at the file system level, then when creating images the data is copied in blocks without dividing into files.

The main advantage of block backup is high speed. A file-based backup starts the process anew for each individual file, whereas a block backup reads the data as one continuous sequence of blocks, regardless of how many files those blocks contain.
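
The block-by-block idea can be sketched as follows: the file (or device image) is read in fixed-size blocks and each block is fingerprinted, with no regard for the file boundaries inside it. Comparing two fingerprint lists then shows which blocks changed since the last backup, so only those need recopying. The block size here is an illustrative assumption; real imaging tools use the device's own block size:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative; imaging tools use the device's native block size

def block_checksums(path: str) -> list[str]:
    """Hash a file block by block, ignoring any file-system structure inside it."""
    sums = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            sums.append(hashlib.sha256(chunk).hexdigest())
    return sums
```

An incremental image-style backup would store the checksum list alongside the image and, on the next run, recopy only the blocks whose checksums differ.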

All of the listed technologies and numerous ways to set up your own data backup will help you avoid a catastrophe in the form of irretrievable loss of valuable information or data of your clients.

When information is placed on external media (that is, at the physical level of its storage), the unit of information is the physical record - a section of the medium on which one or more logical records are located. A named, integral set of homogeneous information recorded on an external medium is called a file. In fact, the file is the main unit of data storage on an external storage device, and it is on files that transformation operations (adding data, updating it, etc.) are performed.

The following types of file data structures are used to store data on external media:

sequential;

direct;

index-sequential;

library.

Two access modes are possible for data in file structures: sequential and random. In sequential access (processing) mode, file records are transferred from external storage to RAM in the order in which they are located on the medium. In random access mode, by contrast, they can be retrieved from the file in whatever order a specific application program requires.

In sequential files, records are located on the media in the order in which they were received. By means of a buffer, they are all sequentially transferred to RAM for processing.


A random processing mode is not possible here, since to search for a record by any criterion, it is necessary to sequentially search through all records. Records that are deleted are physically eliminated by creating a new file.

An example is simple text files (ASCII files). They consist of lines of characters, with each line ending with two special characters: carriage return (CR) and line feed (LF). When editing and viewing text files on a monitor screen, these special characters are usually not visible.

In direct files, there is a direct relationship between the record key and its location on the media. When a logical record is written to a file, the record key is converted or mapped to the memory address where it will be located. The main operating mode in this case is arbitrary, although a sequential data processing mode is also possible. The memory space occupied by a deleted entry can be used for a new entry that has received the same address.
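
The key-to-address mapping of a direct file can be sketched with a simple hashing scheme. The bucket count and the use of MD5 here are illustrative assumptions, not a description of any particular system:

```python
import hashlib

BUCKETS = 8  # number of storage slots; a real direct file sizes this to the medium

def record_address(key: str) -> int:
    """Map a record key to a fixed slot number, as in a direct-organization file."""
    digest = hashlib.md5(key.encode()).digest()   # deterministic across runs
    return int.from_bytes(digest[:4], "big") % BUCKETS
```

Given the key, the slot number (and hence the physical location of the record) is computed directly, with no scanning - which is exactly what makes random-mode processing possible in such files.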

In practice, records are often processed using several fields. In this case, the advantages of direct files are practically reduced to nothing, since processing records in them in random access mode is possible only by one key field.

At the same time, it is obvious that the efficiency of data processing can be increased primarily by arranging records in descending or ascending order of the values ​​of a particular field. Such ordering is carried out, as a rule, not in the original file, but in an additionally created one (such a file converted by some key field is called inverted). When processing a file using several keys, you have to create a corresponding number of inverted files. Since each inverted file actually contains the same information as the original, this approach requires large amounts of external memory.

To streamline data processing, you can use index-sequential files - a combination of a data file and one or more index files. The latter do not store the source data itself, but only the numbers (indexes) of the records of the source file, which determine the order of its processing according to a certain key. The index file is processed in sequential mode, and the data file is processed in direct access mode.
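
The combination of an index file and a data file can be sketched as follows: one sequential pass builds the index, after which any record can be fetched by direct access. The comma-separated record layout is an illustrative assumption:

```python
def build_index(path: str) -> dict[str, int]:
    """Scan the data file once, mapping each record's key to its byte offset."""
    index = {}
    with open(path, "rb") as f:
        while True:
            offset = f.tell()
            line = f.readline()
            if not line:
                break
            key = line.split(b",", 1)[0].decode()   # first field is the key
            index[key] = offset
    return index

def fetch_record(path: str, index: dict[str, int], key: str) -> str:
    """Direct access: seek straight to the record instead of scanning the file."""
    with open(path, "rb") as f:
        f.seek(index[key])
        return f.readline().decode().rstrip("\r\n")
```

The index plays the role of the index file (processed sequentially when built), while the seek into the data file is the direct-access step.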

A file with a library organization consists of sequentially organized sections, each of which has its own name and contains one or more logical records. At the beginning of the file there is a special service section - the so-called table of contents - which allows direct access to each section of data.

Test questions and assignments

1. What levels of data presentation are used to describe the subject area?

2. Define the concepts “logical record” and “record field”.

3. Describe the features of data representation in RAM and external storage.

4. Give examples of linear and nonlinear data storage structures.

5. Describe the types of file structures and features of their organization.


Publication date: 2014-11-18.


What is a backup

A backup copy is a copy of working files and folders that is created regularly or periodically and makes it possible to restore data in case of loss (damage, theft, accidental erasure). In this article we give our view on where backup copies should be kept - that is, we answer the question "Where?". Let everyone choose the storage method that suits them best: for some, low cost matters most; for others, maximum confidentiality.

Where is the best place to store backups of your data?

1. Network Attached Storage (NAS)

Image from D-Link official website

Advantages:

  • Relative compactness of the device.
  • Possibility of placing it in a remote location and concealing it.
  • RAID 1 technology to protect against hard drive failure.
  • Full control over the information: the device is physically in your hands, and your only task is to protect your files with strong passwords. If you don't trust cloud services and believe that administrators look through your files, this option is for you :)

Disadvantages:

  • The likelihood of losing information due to hardware failure is higher than with cloud storage.

The most secure scheme is when the network storage is physically located in a secret room, and backup copies, protected by complex passwords, are written to it over the network.

2. Another computer

This option is similar to using a NAS, but has its own drawbacks:

  • Lower fault tolerance if there is no RAID array.
  • Lower reliability if other people have access to the computer.
  • Bulky. A computer is generally more difficult to disguise than network storage.
  • Higher likelihood of network access problems. The computer may freeze or refuse access. This happens due to the installation of updates or antivirus software.

3. External (portable) hard drive

Image from Western Digital official website

Advantages compared to NAS:

  • Mobility. You can take it with you after making a copy.

Disadvantages compared to NAS:

  • Cannot connect to a computer network directly. Accordingly, it cannot be masked in the connected state.
  • There is no protection against hard drive failure.

4. Cloud storage.

Examples: Google Drive, Yandex.Disk, SkyDrive

Advantages:

  • Easy to access from anywhere in the world and available 24 hours a day.
    Yes, global access to the NAS can also be configured, but using the cloud, the owner will have much more easier access his information.
  • High speed access to backups.
  • The risk of storage failure and data loss is minimal: the cloud storage of Google, Yandex and Microsoft runs on reliable servers maintained by top IT specialists.
  • Protection against theft of the storage itself. If thieves break into the premises and steal the server, the NAS and all the hard drives, you can restore your working data from the cloud.
Disadvantages:

  • Confidentiality is lower than with local storage: your files physically reside on someone else's servers.
  • If you set a weak password, your account can be hacked, after which the information falls into the wrong hands or may simply be deleted.

5. USB flash drive

Advantages:

  • Mobility and compactness. The USB stick can be stored in a secret place.

Disadvantages:

  • Contains a relatively small amount of information.
  • When stored off-site, there is no access to the backup copy.

6. DVD

Advantages:

  • Mobility. Can be stored in a secret place.

Disadvantages:

  • Small capacity.
  • Low speed of creating and restoring backups.
  • The media are fragile and short-lived.

7. Another hard drive on the same computer.

This scheme is one of the simplest. Still, it at least protects against hard drive failure and accidental deletion of files.

Advantages:

  • Instant access to backups.
  • Maximum speed of creating copies and restoring information.

Disadvantages:

  • Does not protect against computer theft.
  • Does not protect against file damage due to hacking and virus infection.
  • Typically, copies can only be accessed from this computer.

In this article we looked at options more or less accessible to the average user. Clearly, there are methods more reliable than network storage: a dedicated server, for example, or better yet ten servers connected by a 100-gigabit Internet channel with real-time synchronization. But such backup schemes are used by providers, large corporations, and the cloud storage services described above.

You might be interested:

9.3 Information security methods

What is information security?

Information protection means ensuring the safety of information on computer media and preventing unauthorized access to it. Protection is provided by:

  • file backup;
  • archival copying of files;
  • restricting access to information;
  • using antivirus tools.

Backing up files

File backup means creating copies of files on computer storage media and systematically updating those copies whenever the originals change.

How to properly store backup copies of data

The need for backups arises from various circumstances. For example, a hard drive may fill up completely, making it impossible to write new information without destroying the old. Or information on the disks may be damaged or completely destroyed during computer operation. This can happen for various reasons:

  • exposure to computer viruses;
  • incorrect actions or accidental destruction of files;
  • physical damage to the disk or hard disk drive;
  • deliberate actions of some persons.

With this backup method, a simple copy of one or more files, or of a file structure (a directory tree with its files), is made on the same or another storage medium (disk, magnetic tape, CD, flash drive, etc.). Such copies take up the same amount of space as the original files. In MS-DOS these are the commands COPY, XCOPY and DISKCOPY; Norton Commander, FAR and similar shells have analogous commands. In Windows, files, directories and disks are copied via the clipboard or by other means. File copying is also used to transfer files between computers that are not connected by a network.
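The plain-copy approach described above can be made cheap to repeat by copying only files that are new or have changed since the last run. A hypothetical helper sketching that idea for a directory tree:

```python
import shutil
from pathlib import Path

def sync_tree(source: Path, target: Path) -> list[str]:
    """Copy only files that are new or newer than the existing copy,
    so repeated backup runs stay cheap. Returns the copied file names."""
    copied = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = target / src.relative_to(source)
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)   # copy2 preserves the modification time
            copied.append(str(src.relative_to(source)))
    return copied
```

Because `copy2` preserves the modification time, a second run right after the first copies nothing at all.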

Archiving files

The main feature of archival copying is file compression, which reduces the space the archive copy occupies on the storage medium. Such a backup produces a single archive file: a set of one or more compressed files from which they can be extracted in their original form. The size of a compressed file is two to ten times smaller than the original. The degree of compression depends, first, on the file type and, second, on the archiver program; text and database files compress best, while binary program files (such as EXE and COM) compress least. The process of writing files to an archive file is called archiving (packing), extracting files from an archive is called unarchiving (unpacking), and the archive file itself is called an archive.
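The two- to ten-fold compression claimed above is easy to observe on a repetitive text file. A small sketch using Python's standard zipfile module (the file name and contents are made up for the example):

```python
import os
import tempfile
import zipfile

# Create a highly repetitive text file (compresses well, like the
# text and database files mentioned above), then archive it.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "report.txt")
    with open(src, "w") as f:
        f.write("quarterly totals\n" * 10_000)

    arc = os.path.join(tmp, "report.zip")
    with zipfile.ZipFile(arc, "w", zipfile.ZIP_DEFLATED) as z:
        z.write(src, "report.txt")          # pack the file into the archive

    ratio = os.path.getsize(src) / os.path.getsize(arc)
    print(f"compressed {ratio:.0f}x smaller")

    with zipfile.ZipFile(arc) as z:
        names = z.namelist()                # the archive's table of contents
```

On repetitive text the ratio far exceeds the two to ten times quoted for typical files; on already-compressed binaries it would be close to 1.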

An archive file contains a table of contents from which you can find out what files it holds. Some archivers can create multi-volume archives.

Archiving is done with archiver programs. The most common archivers have roughly the same capabilities, and none surpasses the others in every respect: some are faster, others compress better. An archiver's functions include:

  • placing files in an archive;
  • extracting files from an archive;
  • viewing the archive's table of contents;
  • moving files into or out of the archive (the source files are deleted after the transfer);
  • archiving entire directories;
  • checking archive integrity;
  • repairing damaged archives;
  • protecting archives with a password.

Restricting access to information

Restricting access to information means preventing unauthorized access to it. It is provided by software and hardware means:

  • password protection;
  • file encryption;
  • secure erasure of files after deletion;
  • hardware (electronic) keys;
  • manufacture of computers in special protected versions.

Passwords

Passwords are used to identify users and delimit their rights on a computer network, and to restrict the access of users sharing one computer to particular logical drives, directories and files. Different levels of password protection can be set: for example, reading a disk may be allowed without a password, while changing, deleting or saving a file on the protected disk requires one. Password protection of files does not require their encryption.
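A common way to implement such password checks without ever storing the password itself is to keep only a salted hash. A minimal sketch (the helper names are hypothetical; the text above does not prescribe an algorithm, so PBKDF2 from Python's standard library is used here as one reasonable choice):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a storable (salt, digest) pair; the password itself is never stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password, salt, digest):
    """Recompute the digest for the candidate password and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

The stored pair reveals nothing useful to an attacker who copies the disk, yet lets the system verify any password attempt.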

Encryption

Encryption is a transformation of data such that it can be read only with a key. The science of encryption is called cryptography. In cryptography, unencrypted data is called plaintext and encrypted data is called ciphertext. Modern encryption algorithms pose a complex mathematical problem: solving it without the decryption key requires an enormous amount of computation, and an answer might be obtained only after years of work.
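The principle — data readable only with the key — can be illustrated with a toy stream cipher: XOR the data with a byte stream derived from the key, so applying the same operation twice restores the plaintext. This is a demonstration of the idea only, not a production-grade cipher:

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    """Derive an endless byte stream from the key (toy construction)."""
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR the data with the keystream; running it again with the
    same key restores the original bytes."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))
```

Without the key the ciphertext looks like noise; real systems use vetted algorithms (AES and the like) built on the same idea with far stronger guarantees.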

Drive protection

When disk write protection is enabled, a resident module is loaded into memory that reports every write attempt; the user must then allow or deny the write. This protection reduces the likelihood of information being destroyed by erroneous user actions and also helps detect possible viruses.

Displaying (visualizing) the process of reading or writing to a disk draws the user's attention to this process so that the user can evaluate the legitimacy of access to the disk.

23.05.2018

A reliable way to store data. The best external hard drives for reliable storage

With the advent of computers, the question of how to store information that exists in digital form became acute, and it remains highly relevant today: we all want to keep our photos and videos as lasting memories. So first you will have to work out which devices and media are suited to long-term storage of information, and weigh all their advantages and disadvantages.

The concept of information and methods of storing it

Nowadays, you can find several main types of information data on computers. The most common forms are text, graphic, audio, video, mathematical and other formats.

In the simplest version, computer hard drives are used to store information, on which the user initially saves the file. But this is only one side of the coin, because in order to view (extract) this information, you need at least an operating system and corresponding programs, which, by and large, also represent information data.

It is interesting that in computer science lessons in schools, when choosing the correct answer to such questions, one often comes across the statement that, supposedly, RAM is used for long-term storage of information. And schoolchildren who are not familiar with the specifics and principles of its work consider this the correct answer.


Unfortunately, they are wrong, because RAM only stores information about currently running processes, and when they terminate or the system is rebooted, the RAM is completely cleared. This is similar to the principle of the once popular children's drawing toys, when you could first draw something on the screen, and then shake the toy, and the drawing would disappear, or when the teacher erases text written in chalk from the blackboard.

How information was stored before

The very first method of preserving information in the form of rock paintings (graphics, by the way) has been known since time immemorial.


Much later, with the advent of speech, information began to be preserved by word of mouth (myths, legends, epics). Writing led to the appearance of books; paintings and drawings were not forgotten either. With the advent of photography, sound and video recording, the corresponding media appeared. But all of them proved short-lived.

Device for long-term storage of information: basic requirements

Archival storage. In this case, it is assumed that important information will be stored for a long time while providing quick access to it, which dictates very specific requirements for storage technologies and equipment, in particular, long-term storage of large volumes of information in an unchanged form. Robotic optical disc libraries meet all these conditions.

It should be noted that in most European countries and the United States the need for archival storage of key business information is enshrined at the legislative level. About 25 thousand directives have been adopted worldwide, including decrees of governments and individual ministries in Germany, Italy, the USA, Great Britain and other countries, requiring the storage of data on financial transactions, stock exchange transactions, medical research and insurance payments for five to ten years .

Legislative standards for data storage are also being actively developed in our country, with Russia's planned accession to the WTO acting as a powerful catalyst. In the near future many companies will be legally required to store data for long periods and will therefore have to upgrade their storage systems, so the archive storage market in Russia will most likely grow significantly faster than the global average.

FEATURES OF ARCHIVE STORAGE

The first and most important requirement for an electronic archive is that the physical possibility of deleting or altering data, whether through negligence or malice, must be excluded. In other words, the medium must provide true write-once, read-many recording (True Write Once Read Many, True WORM), so protection against deletion is implemented in hardware rather than in software. In addition, long shelf life and high media capacity are key requirements: they significantly reduce the system's total cost of ownership (TCO) and satisfy the storage capacity demands of the largest companies, including enterprises in the public and industrial sectors.

It follows from these conditions that neither RAID arrays nor tape drives can handle archival data storage. Nevertheless, in Russia the bulk of information resources is stored on hard drives or RAID arrays; even information requiring long-term, reliable storage is entrusted to hard drives. Meanwhile, the very principle of a hard drive's operation involves constant mechanical motion, which leads to malfunctions and periodic loss of information, and manufacturers do not guarantee a hard drive's performance over decades. Entrusting their most valuable data to RAID arrays, users sometimes forget that RAID technology was created precisely to compensate for the unreliability and fragility of hard drives.

Similar questions arise when trying to build an archival data storage based on tape drives: the fragility of the medium forces you to periodically transfer data from the old tape to a new one. In addition, the tape requires maintenance - if it is not in use, it must be rewound regularly to prevent demagnetization. This technology has other disadvantages, in particular, direct access to an arbitrary file on tape is not possible.

To solve the problem of archival data storage, a new class of specialized devices was developed - archival drives. These robotic optical disk libraries, controlled by custom software, enable the construction of a robust storage system to support automated information lifecycle management.

HARD DRIVE FAILURE STATISTICS

Google Inc. conducted an independent analysis of hard drive failure statistics. The accumulated database (more than 100 thousand HDD units) is many times larger than any other published study of its kind.

The results clearly demonstrate the ineffectiveness of using hard drives in long-term archival storage systems: the cumulative failure rate of hard drives reaches 25% by the end of the fourth year of operation (see Figure 1). As a result, HDD-based systems must be redundant, support migration and backup infrastructure, and undergo frequent maintenance. This explains the high total cost of ownership for hard drive-based archives.


For the construction of large storage systems it is essential that, in a multi-disk array (more than 10 hard drives), uninterrupted operation without maintenance becomes unlikely just a few years after commissioning (see Tables 1 and 2), and that more than half of all failures cannot be predicted even with modern built-in failure-prediction technology (SMART).
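Why maintenance-free operation of a 10-drive array becomes unlikely follows from simple probability: if each drive independently fails with probability p over some period, at least one of n drives fails with probability 1 − (1 − p)ⁿ. A quick sketch (the 25% figure is the cumulative four-year rate quoted above; treating the drives as independent is an assumption of this illustration):

```python
def p_any_failure(p_single: float, n_drives: int) -> float:
    """Probability that at least one of n independent drives fails,
    given each fails with probability p_single over the period."""
    return 1 - (1 - p_single) ** n_drives

# With a 25% per-drive cumulative failure probability over four years,
# a 10-drive array almost certainly needs maintenance in that window.
print(f"{p_any_failure(0.25, 10):.2%}")  # → 94.37%
```

Even a modest per-drive failure rate compounds quickly with array size, which is why large HDD arrays need redundancy and constant servicing.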



Even with constant maintenance, backup and disk replacement, users should bear in mind that, according to the statistics, more than a third of all HDDs fail in their fifth year of operation. Given obsolescence as well, ensuring timely replacement becomes genuinely difficult, so to reduce the risk of data loss it is usually most advisable to replace the drives entirely after three to four years of operation, which entails additional costs.

RELIABILITY OF INFORMATION STORAGE ON OPTICAL STORAGES

According to the Enterprise Strategy Group (ESG), of all existing technologies, robotic optical drives (DVD/BD libraries) are optimal for long-term data storage, using which the total cost of storing information is significantly lower than in the case of alternative technologies.

The immutability of data stored on optical media is guaranteed at the physical level, since the recording process represents an irreversible change in the structure of the disk as a result of crystallization of the amorphous layer, which complies with the True WORM write-once standard. The stored data cannot be erased or changed - it is read-only.

The most common type of optical media used for modern archival storage devices are DVDs. DVD manufacturers produce discs with a special hard coating, which guarantees the safety of information and fully complies with the international ECMA standard, while the service life of the media exceeds 30 years.

Thus, optical technologies provide the following advantages:

    They guarantee exceptionally reliable data storage for decades;

    The True WORM specification is supported at the physical level, since during the recording process an irreversible change in the state of matter occurs;

    The capacity of a single medium is already 50 GB, which allows data warehouses of significant volume to be created and expanded as needed.

    Blu-ray Disc technology provides random access to data, and the laser head positions itself on the disc as quickly as a hard drive's head.

RESEARCH METHODOLOGY

To confirm the service life of the discs, their samples are tested using the artificial aging method. The discs will meet the standard if 95% of the samples have a predicted shelf life exceeding 30 years.

During testing, disc read-error rates are measured. When critical levels are exceeded, the read errors become unrecoverable and the sample is deemed unusable; the time to failure is recorded, and from these results the expected lifetime under normal conditions is derived.

During testing, discs are placed in a special chamber at elevated temperature, which activates diffusion processes in the media and simulates the natural aging of the material. The discs are also tested under high humidity, in aggressive environments, under exposure to microorganisms and dust, and under mechanical stress.

First, disc performance is measured at a high temperature. In each subsequent experiment the temperature is lowered in 5 °C steps down to 60 °C; with each step the measured service life of the disc increases. The room-temperature figure is extrapolated from the shape of the resulting curve. For a polycarbonate substrate, the extrapolated shelf life at room temperature reaches 133 years.
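Such extrapolation is usually done with an Arrhenius-type model, in which lifetime grows exponentially as temperature falls. A schematic sketch (the activation energy and test numbers here are illustrative assumptions, not figures from the article):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_lifetime(t_ref_hours, temp_ref_c, temp_c, activation_ev=0.8):
    """Extrapolate media lifetime measured in an accelerated test at
    temp_ref_c to a lower temperature temp_c using the Arrhenius model
    t = A * exp(Ea / (kT)). The activation energy is an assumed value."""
    t_ref = temp_ref_c + 273.15   # convert to kelvin
    t_new = temp_c + 273.15
    return t_ref_hours * math.exp(activation_ev / K_B * (1 / t_new - 1 / t_ref))

# e.g. a disc surviving 2000 hours at 80 °C is predicted to last far
# longer at a 25 °C room temperature:
years = arrhenius_lifetime(2000, 80, 25) / (24 * 365)
print(f"about {years:.0f} years")
```

The steep exponential is why a few thousand hours in a hot chamber can stand in for decades on a shelf.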


A special hard coating ensures long-term preservation of information recorded on DVD thanks to better scratch protection. This is confirmed by tests on the HEIDON-14 tester, in which scratches are applied with a 7 mm steel ball covered with non-woven fabric moving at 1000 mm/min (see Figure 2). In addition, the antistatic component of the coating quickly dissipates static electricity from the disc surface and keeps dust from sticking during use and storage (the corresponding tests ran in a dust chamber for 24 hours). The oil-repellent surface reduces the risk of data loss if someone accidentally touches the disc surface and makes fingerprints easier to wipe off (see Figure 3). Hard-coated DVDs fully comply with the standards for all performance characteristics and remain highly stable during testing at elevated temperature and humidity (80 °C, 90% relative humidity).


Tests conducted by ECMA International confirm that robotic libraries based on certified hard-coated DVDs provide reliable storage of archival data for 30 years and fully meet archival storage standards.

IMPROVING STORAGE TECHNOLOGIES

The problem of archival storage becomes ever more relevant as the volume of stored data snowballs. Globally, archival information is growing much faster than any other kind, yet quick access is required to only 20-30% of it. By 2010 its total volume will reach one zettabyte, i.e. 10²¹ bytes.

Currently, DVD allows you to store 9.4 GB on a single media, and drives based on Blu-ray technology - up to 50 GB on a single BD disc. In the coming years, it is planned to increase the capacity of commercially produced optical disks to 100 GB, and in the future to 200 GB (see Figure 4). This will make optical technologies even more accessible.


Continuity of technology is important: modern optical drives can still read CDs released 25 years ago. The form factor of optical discs will not change in the future, which allows us to count on their compatibility with future storage devices.

BLU-RAY TECHNOLOGY

Modern Blu-ray optical technology provides high-density archiving on media with a capacity of 25 or 50 GB each; capacities of 100 and even 200 GB are achievable in the future. Single-sided media can carry one or more 25 GB recording layers, support write-once (BD-R) and rewritable (BD-RE) modes, and provide highly efficient sector error correction. A Blu-ray disc is 120 mm in diameter and has a hard-coated surface.

Blu-ray drives are read/write compatible with CD/DVD media. The technology is supported by all major drive and media manufacturers, as well as the UDF file system. Modern Blu-ray drives provide 2x write speed (72 Mbit/s) and 5x read speed (for single-layer media).

USING ARCHIVAL DRIVES

Archival drives are used in an enterprise's information infrastructure wherever long-term, reliable data storage is required (see Figure 5). The management software automatically migrates data from the network or servers according to predefined rules. It is estimated that about 80% of the data kept on tier-1 storage does not require frequent access, and 20% of it will never be needed again; it makes sense to move such data to optical archival drives, freeing up expensive RAID disk space.


When choosing an archival storage system, preference should be given to optical technologies DVD and BD. Only they ensure the fulfillment of all requirements for storage, including such parameters as high reliability and long-term storage, authenticity and immutability of data, fast random access to data, high storage capacity, and expandability. Optical technologies have been proven over decades and thousands of installations around the world.

Igor Korepanov is the Marketing Director of the Electronic Archive company. He can be contacted at the following addresses: and http://www.elar.ru.

"He who owns information owns the world," as the old saying goes. But information has always had to be stored somewhere, summarized, found at the right moment and put to use. At first there were rock faces and cave walls painted by ancient people, then clay tablets, birch bark, papyrus, paper, vinyl records, magnetic tape, punched cards, floppy disks, optical discs and, finally, the hard drive.
But a hard drive inside a system unit is constantly exposed to dangers that can destroy the recorded information, above all voltage surges and overheating. And some information is not needed in real time yet remains valuable to its owner: a catalog of family photos, favorite music, movies (downloaded, of course, from a free torrent). Such information is better kept not on an internal hard drive but on an external one: you can carry it with you, it is always at hand, and there is less danger of losing it.
I was scheduled to go on a year-long business trip to one of the southern republics of Russia. Naturally, I went to buy an external hard drive to load with photos of family and friends, concerts of my favorite rock bands and, of course, movies, lots of movies! The main criteria were a 1 TB capacity, decent transfer speed and durability. After consulting the store manager, I chose the Western Digital WDBBJH0010BBK HDD.

During use, its advantages became clear. Speed: it matches the declared figures. USB 3.0/2.0 support: works fine with either standard. Design: not bulky, with rounded edges; the top and bottom surfaces don't pick up fingerprints. Noise: quiet enough for me. Vibration: moderate (tolerable), and the rubber feet on the bottom damp about 80% of it. Omnivorous: both the TV and the computer recognized the drive without any problems.

Later, however, shortcomings began to show, seemingly small but real. The main one is its own power cord. Just imagine: at home or in the office your surge protector is already feeding a monitor, a system unit, active speakers, a printer, a modem, a router and assorted other indispensable power bricks, and there is no socket left for the external drive, while the nearest free outlet is five meters away on the opposite wall. Plug it in and the cord hangs halfway across the room; everyone trips over it! Then, during operation, noise and vibration appeared. With the TV on and people around you don't notice, but left alone with the drive, the sounds become genuinely irritating. Its size also began to bother me: it is large, it can't sit on top of an LCD monitor, it is in the way on the desk, the USB cord is short, and the power cord is stretched taut.
It's no secret that a business trip is an outlet for a man: a year-long trip is liberating at first and stressful later. Being no saints, we spent our off-hours watching the latest Hollywood masterpieces together over alcoholic beverages of various strengths. On one such evening a cheerful friend brushed my external drive with his hand, and the device fell to the floor from a height of 1.5 meters. The height was small and the floor was linoleum, but I tensed up anyway. I connected the drive, still confident; neither the computer nor the TV could see it. Hoping to fix it, I removed the outer casing and, to my great surprise, found inside an ordinary hard drive, the kind normally installed in a system unit. Why Western Digital builds its portable drives around standard internal drives is still a mystery to me.

My own attempts to revive the disk came to nothing. I handed the device to acquaintances from Department "K" of the Ministry of Internal Affairs (they deal with computer crime) in the hope that they could fix it, but even they were unable to restore either the drive or the information on it: the fall had dislodged part of the read head assembly and mechanically damaged the platter surface. That is how I lost my family photos, favorite music and masterpieces of the Hollywood film industry. And 57 GB of high-quality porn.
Taking this bitter experience into account and spending a little time online, I chose a 1 TB Western Digital My Passport Essential Silver (WDBEMM0010BSL) external hard drive.


Its advantages: a high-capacity external drive with low power consumption. The model turns on and off together with the computer, and after several minutes of inactivity it goes into standby mode, which allows the drive to be used with both desktop computers and laptops. It is powered over the USB cable, so no separate power supply is needed.

The 1 TB Western Digital My Passport Essential Silver (WDBEMM0010BSL) has a stylish, ergonomic case and small size and weight, so it fits in a small bag or pocket. It has proven a convenient and reliable secondary drive: just connect it to a USB port. With 1 TB of space you can store a huge number of files without worrying about running out of room.
As you can see, the main differences from the previous drive are smaller size, higher transfer speed, no separate power cord and, most importantly, better resistance to physical shocks. The device is fully compatible with all my equipment and works without failures. One thing is still worrying: it, too, contains rotating parts, and if you drop it there is a high chance of losing the stored information.


Thank goodness technological progress does not stand still: external solid-state drives (SSDs) are already on sale. They are built on silicon, like flash memory, and have no moving parts, so there is nothing to break; heat, water and pressure are far less of a threat to the information. Add to that the small size, high transfer speed and long data retention (manufacturers claim up to 1000 years). Their only drawback is the high cost. But that is only for now.
Samsung has been especially successful with SSDs: its external drives Samsung MZ-7PD256BW and Samsung MZ-7TE1T0BW held leading positions in late 2013 and early 2014 in the characteristics that matter most: speed, reliability and durability. They will keep your information safe. With them you really can own the world, as the saying goes.

2013-08-26T11:45:39+00:00

Andrey Samkov

What a pity that summer is ending! Surely you've accumulated a whole suitcase of summer memories: someone graduated from school or university, someone spent an amazing vacation on a paradise beach, someone got married, someone had a baby. Photos and videos of such events deserve to be kept in their original form for a long time, so that you can revisit them many years later. How do you protect your "digital memories" from loss? Let's figure it out.

Computer storage

Obviously, this is the least reliable way to keep data, yet, alas, it is what most of us do. Laptops look especially risky here: we carry them everywhere and connect to all sorts of networks, wired and wireless, which means a laptop is always in the "risk zone".

We can break it, or it can be stolen, sad as that is to contemplate. And "promiscuous" network connections mean the laptop constantly runs under threat of infection by viruses, Trojans and other malware (and even a reliable antivirus, unfortunately, is not a 100% guarantee, because the struggle between virus writers and virus fighters goes on continuously with varying success).

And it would be doubly painful if, on top of a computer that has stopped working, we also lost our valuable information.

External hard drives

External drives should not be kept permanently connected to the computer: it is no secret that even the best computers fail from time to time. It is especially dangerous when the failure involves the power supply or power circuitry: a damaged component can "take to the grave" everything connected to the computer, including your precious electronic archive.

At the same time, you shouldn't hide the hard drive in a closet for years either: the information on it is stored as magnetized areas, and this magnetization can (and will) fade over time; in other words, the disk demagnetizes and the data is lost. So every few months the hard drive should still be connected to the computer so that it can refresh the magnetization of its platters.
