Atari ST FD Information

 V1.0 - Last modified 14/01/2007 20:23

Sorry, no French version for the moment.

Atari ST Diskette Information

This page contains quite a lot of information related to Atari ST diskettes: the floppy disk media (down to the flux level), the FD drives, the FD controller, the FD copy protection mechanisms, the FD layouts, FD-specific hardware solutions, etc. The end goal is to help with understanding the duplication (backup) of Atari ST diskettes (protected or not); this should not be confused with a preservation project like PASTI.

Backup Philosophy: a backup program should always do its utmost to ensure the integrity of the resulting copy. The copy produced should operate just like the original, and must not remove the protection or modify the program being copied in any way. The backup program must also do its utmost to check that the copy produced is correct, with correct checksums; dumb analog copiers should therefore be avoided.

There are several Atari ST FD imaging formats for non-protected diskettes (ST, DIM, MSA, ...) which are mainly used for emulation purposes but can also be used to recreate diskettes (backup). There are also a few imaging formats for protected diskettes (STT, STX, ...) which allow programs to run on emulators, but it is not yet possible to recreate FDs from these images. For example the PASTI preservation project provides the capability to create disk images of almost any protected FD in STX format that can be run on the SainT and STeem emulators, and there is also a plan to be able to recreate protected FDs from STX images in the future.

Note that in order to create a backup of most Atari copy-protected FDs, special hardware is required, because many of the protection mechanisms cannot be handled directly by the Atari FD controller. The best hardware solution for creating backups of Atari diskettes (protected or not) is the Discovery Cartridge from Happy Computer. It uses a specially designed IC that allows working down at the flux level when necessary, and it can therefore handle all possible Atari ST protection mechanisms.

One solution for duplicating diskettes is:

  1. To create a disk image of the diskette using an open disk image format that can handle all known copy protection mechanisms used on Atari ST diskettes (note that this is similar to what the ATP working group is doing for Atari 8-bit disk images, or SPS for the Amiga, ...). Of course the STX format from PASTI is the format of choice as it provides the following benefits: it already exists, it is usable by several emulators, and there are already hundreds of STX images available. A less attractive substitute for the STX format is the Discovery Cartridge dump format.
  2. To be able to create diskettes from the protected disk image format. As already mentioned, this requires specialized hardware, and the choice here is to use the Discovery Cartridge for that purpose.

Table Of Contents

Floppy Drives information

First, a lot of useful information about floppy drives and floppy disks can be found in these excellent documents: The Floppy User Guide by Michael Haardt, Alain Knaff, and David C. Niemi, and The Technology of Magnetic Disk Storage by Steve Gibson.

Drive General Information

Information about micro floppy drives: Teac & Citizen Micro Floppy Disk Drive Spec, Shoreline Drive X1DE31U.

Below is a functional diagram of a typical floppy disc drive

Floppy Drive Read/Write Heads

The read/write heads of a floppy drive are used to convert binary data to electromagnetic pulses when writing to the disk, and the reverse when reading. They are energy converters: they transform electrical signals into magnetic signals, and magnetic signals back into electrical ones again. The heads on a VCR or home stereo tape deck perform a similar function, although using very different technology. The read/write heads are in essence tiny electromagnets that perform this conversion from electrical information to magnetic and back again. Each bit of data to be stored is recorded onto the disk using a special encoding method that translates zeros and ones into patterns of magnetic flux reversals.

Conventional floppy disk ferrite heads work by making use of two main principles of electromagnetism. The first is that applying an electrical current through a coil produces a magnetic field; this is used when writing to the disk. The direction of the magnetic field produced depends on the direction in which the current flows through the coil. The second is the opposite: a changing magnetic field applied to a coil will cause an electrical current to flow; this is used when reading back the previously written information. More detail is given in the section Technical Requirements for Encoding and Decoding.

There are several important differences between floppy disk and hard disk read/write heads. One is that floppy disk heads are larger and much less precise than hard disk heads, because the track density of a floppy disk is much lower than that of a hard disk. The tracks are laid down with much less precision; in general, the technology is more "primitive". Hard disks have a track density of thousands of tracks per inch, while floppy disks have a track density of 135 tracks per inch or less.

In terms of technology, floppy disks still use the old ferrite style of head that was used on the oldest hard disks. In essence, this head is an iron core with wire wrapped around it to form a controllable electromagnet. The floppy drive, however, uses a contact recording technology: the heads directly contact the disk media, instead of using floating heads that skim over the surface the way hard disks do. Using direct contact results in more reliable data transfer with this simpler technology; in any case, it is impossible to maintain a consistent floating head gap when using flexible media like floppies. Since floppy disks spin at a much slower speed than hard disks--typically 300 to 360 RPM instead of the 3600 RPM or more of hard disks--the heads can contact the media without causing rapid wear of the media's magnetic material. Over time, however, some wear does occur, and magnetic oxide and dirt build up on the heads, which is why floppy disk heads must be periodically cleaned. Contact recording also makes the floppy disk system more sensitive to dirt-induced errors, caused by the media getting scratched or pitted. For these reasons, floppy disks are, overall, much less reliable than hard disks.

The floppy disk also uses a special design that incorporates two erase heads in addition to the read/write head. These are called tunnel-erase heads. They are positioned behind and to each side of the read/write head. Their function is to erase any stray magnetic information that the read/write head might have recorded outside the defined track it is writing. They are necessary to keep each track on the floppy well-defined and separate from the others. Otherwise interference might result between the tracks.

All modern--and even not-so-modern--floppy drives are double-sided; only very old drives were single-sided (e.g. the drive of early Atari 520 STF machines). Since the disks are double-sided, there are two heads, one per side. The heads contact the media on each side by basically squeezing the media between them when the disk is inserted. The heads for different drives vary slightly based on the drive format and density.

Sense, Amplification and Conversion Circuits

Since the signals read from the disk are very weak, special circuits are required to read the low-voltage signals coming from the drive heads, amplify them, and interpret them to decide whether each signal read is a one or a zero. See also Magnetodynamics & Drive AGC.

Floppy Drive Ferrite Heads

The oldest head design is also the simplest conceptually. A ferrite head is a U-shaped iron core wrapped with electrical windings to create the read/write head--almost a classical electromagnet, but very small. (The name "ferrite" comes from the iron of the core.) The result of this design is much like a child's U-shaped magnet, with each end representing one of the poles, north and south. When writing, the current in the coil creates a polarized magnetic field in the gap between the poles of the core, which magnetizes the surface of the media under the head. When the direction of the current is reversed, the opposite polarity magnetic field is created. For reading, the process is reversed: the head is passed over the magnetic fields and a current of one direction or another is induced in the windings, depending on the polarity of the magnetic field.

Magnetodynamics & Drive AGC

This section gives more information on magneto-dynamics; the information comes from the SpinRite site. You may want to read this interesting document on the subject.

It is important to know that data pulses occurring near one another interact with one another. Here's what we mean:

As previously explained, data bits are recorded on a magnetic surface by reversing the direction of the magnetism at a location. This magnetic reversal event is later picked up when the drive is reading. This event is treated as a data bit from the drive. Since a reversal of the magnetic field means making it go in the opposite direction, the signal picked up when reading the disk alternates in polarity, with a positive pulse followed by a negative pulse, followed by a positive pulse, followed by a negative pulse, and so on ...

If the pulses were ideal, perfect and square, the result would look something like this:

Notice that the pulses alternate in direction and return to the center.

But reality is much more messy. The diagram on the left shows a single, perfect, idealized pulse like those above, and the diagram on the right shows the actual shape of the pulse that is read back by a magnetic read head:

Ideal Pulse

Actual Pulse

As you can see, the actual pulses are significantly rounded, are much broader, and even have a small overshoot near the end. This pulse breadth creates data pattern sensitivity because closely spaced pulses of opposite polarity overlap each other and partially cancel each other out.

Typical Worst Case Pulse Sequence

In the diagram above, the first two pulses are spaced very far apart from all the others, so that they do not overlap and will be read as maximum amplitude pulses. This tricks the drive's read amplifier into turning down its automatic gain control (AGC), since it assumes a strong signal is coming from the drive's read head. But unlike the first two pulses, the last three pulses are spaced as closely together as possible. Let's look at what happens when we change our view from "theoretical pulses" to "actual pulses" ...

Actual "Worst Case Pattern" Pulses Read Back From Disk

The big A and B pulses are read back with maximum amplitude because no other pulses are nearby. But the negative polarity D pulse, being tightly sandwiched between the two positive polarity C and E pulses, gets pulled upward by its positive neighbors, so it isn't nearly as strong as it would normally be!

The first two isolated pulses set the drive's automatic gain control (AGC) to its minimum possible gain by creating maximum possible amplitude pulses. The next three pulses create a minimum-amplitude center pulse, specifically designed to test the strength of the disk surface underneath the center pulse.

Floppy Disk Encoding/Decoding

Most of the information in this section is taken almost directly from Hard Disk Data Encoding / Decoding. You can also find interesting information about encoding in "RLL Technical Details" by Pete Holzmann.

Technical Requirements for Encoding and Decoding

You might think that since there are two magnetic polarities, N-S and S-N, they could nicely be used to represent a "one" and a "zero" respectively, to allow easy encoding of digital information. Simple! Well, that would be nice, but as with most things in real life, it usually doesn't work that way. There are three key reasons why this simple 1-to-1 encoding is not possible:

  • Fields vs. Reversals: Read/write heads are designed not to measure the actual polarity of the magnetic fields, but rather flux reversals, which occur when the head moves from an area that has north-south polarity to one that has south-north polarity, or vice-versa. The reason the heads are designed based on flux reversals, instead of absolute magnetic field, is that reversals are easier to measure. When the disk head passes over a reversal, a small voltage spike is produced that can be picked up by the detection circuitry. As disk density increases, the strength of each individual magnetic field continues to decrease, which makes detection sensitivity critical. What this all means is that the encoding of data must be done based on flux reversals, and not on the contents of the individual fields.
  • Synchronization: Another consideration in the encoding of data is the necessity of using some sort of method of indicating where one bit ends and another begins. Even if we could use one polarity to represent a "one" and another to represent a "zero", what would happen if we needed to encode on the disk a stream of 1,000 consecutive zeros? It would be very difficult to tell where, say, bit 787 ended and bit 788 began. Imagine driving down a highway with no odometer or highway markings and being asked to stop exactly at mile #787 on the highway. It would be pretty hard to do, even if you knew where you started from and your exact speed.
  • Field Separation: Although we can conceptually think of putting 1000 tiny N-S pole magnets in a row one after the other, in reality magnetic fields don't work this way. They are additive. Aligning 1000 small magnetic fields near each other would create one large magnetic field, 1000 times the size and strength of the individual components. Without getting too far into the details, let's just say that this would, in layman's terms, create a mess.

Therefore, in order to encode data on the hard disk so that we'll be able to read it back reliably, we need to take the issues above into account. We must encode using flux reversals, not absolute fields. We must keep the number of consecutive fields of same polarity to a minimum. And to keep track of which bit is where, some sort of clock synchronization must be added to the encoding sequence. Considering the highway example above, this is somewhat analogous to adding markers or milestones along the road.

Idealized depiction of the way hard disk data is written and then read. The top waveform shows how patterns are written to the disk. In the middle, a representation is shown of the way the media on the disk is magnetized into domains of opposite direction based on the polarity of the write current. The waveform on the bottom shows how the flux transitions on the disk translate into positive and negative voltage pulses when the disk is read. Note that the pattern above is made up and doesn't follow any particular pattern or encoding method.

In addition to the requirements we just examined, there's another design limit that must be taken into account: the magnetization limits of the media itself. Each linear inch of space on a track can only store so many flux reversals. This is one of the limitations in recording density, the number of bits that can be stored on the platter surface. Since we need to use some flux reversals to provide clock synchronization, these are not available for data. A prime goal of data encoding methods is therefore to decrease the number of flux reversals used for clocking relative to the number used for real data.

The earliest encoding methods were relatively primitive and wasted a lot of flux reversals on clock information. Over time, storage engineers discovered progressively better methods that used fewer flux reversals to encode the same amount of information. This allowed the data to effectively be packed tighter into the same amount of space. It's important to understand the distinction of what density means in this context. Hardware technology strives to allow more bits to be stored in the same area by allowing more flux reversals per linear inch of track. Encoding methods strive to allow more bits to be stored by allowing more bits to be encoded (on average) per flux reversal.

Frequency Modulation (FM)

The first common encoding system for recording digital data on magnetic media was frequency modulation, of course abbreviated FM. This is a simple scheme, where a one is recorded as two consecutive flux reversals, and a zero is recorded as a flux reversal followed by no flux reversal. This can also be thought of as follows: a flux reversal is made at the start of each bit to represent the clock, and then an additional reversal is added in the middle of each bit for a one, while the additional reversal is omitted for a zero.

This table shows the encoding pattern for FM (where I have designated "R" to represent a flux reversal and "N" to represent no flux reversal). The average number of flux reversals per bit on a random bit stream pattern is 1.5. The best case (all zeroes) would be 1, the worst case (all ones) would be 2:

Bit Pattern Encoding Pattern Flux Reversals Per Bit Bit Pattern Commonality In Random Bit Stream
0 RN 1 50%
1 RR 2 50%
Weighted Average: 1.5 flux reversals per bit



The name "frequency modulation" comes from the fact that the number of reversals is doubled for ones compared to that for zeros. This can be seen in the patterns that are created if you look at the encoding pattern of a stream of ones or zeros. A byte of zeroes would be encoded as "RNRNRNRNRNRNRNRN", while a byte of all ones would be "RRRRRRRRRRRRRRRR". As you can see, the ones have double the frequency of reversals compared to the zeros; hence frequency modulation (meaning, changing frequency based on data value).
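The rule above can be sketched in a few lines of Python (illustrative only, not part of the original article; the function name fm_encode is arbitrary), using the same R/N notation as the table:

```python
def fm_encode(bits):
    """FM: every bit cell starts with a clock reversal (R);
    a one adds a mid-cell reversal (R), a zero leaves it out (N)."""
    return "".join("RR" if b == "1" else "RN" for b in bits)

# A byte of zeros vs. a byte of ones, as in the text:
print(fm_encode("00000000"))  # RNRNRNRNRNRNRNRN
print(fm_encode("11111111"))  # RRRRRRRRRRRRRRRR
```

As expected, the all-ones byte has double the reversal frequency of the all-zeros byte.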

FM encoding write waveform for the byte "10001111".
Each bit cell is depicted as a blue rectangle with a pink line representing
the position where a reversal is placed, if necessary, in the middle of the cell.

The problem with FM is that it is very wasteful: each bit requires two flux reversal positions, with a flux reversal being added for clocking every bit. Compared to more advanced encoding methods that try to reduce the number of clocking reversals, FM requires double (or more) the number of reversals for the same amount of data. This method was used on the earliest floppy disk drives, the immediate ancestors of those used in PCs. If you remember using "single density" floppy disks in the late 1970s or early 1980s, that designation commonly refers to magnetic storage using FM encoding. FM was actually made obsolete by MFM before the IBM PC was introduced, but it provides the basis for understanding MFM.

Modified Frequency Modulation (MFM)

A refinement of the FM encoding method is modified frequency modulation, or MFM. MFM improves on FM by reducing the number of flux reversals inserted just for the clock. Instead of inserting a clock reversal at the start of every bit, one is inserted only between consecutive zeros. When a 1 is involved there is already a reversal (in the middle of the bit) so additional clocking reversals are not needed. When a zero is preceded by a 1, we similarly know there was recently a reversal and another is not needed. Only long strings of zeros have to be "broken up" by adding clocking reversals.
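This clocking rule is easy to express as a small sketch (again illustrative, with an arbitrary helper name); note how the previous bit decides whether a clock reversal is inserted:

```python
def mfm_encode(bits, prev="0"):
    """MFM: a one is a mid-cell reversal (NR); a zero gets a leading
    clock reversal (RN) only when the previous bit was also zero,
    otherwise no reversal at all (NN)."""
    out = []
    for b in bits:
        if b == "1":
            out.append("NR")
        elif prev == "0":
            out.append("RN")
        else:
            out.append("NN")
        prev = b
    return "".join(out)

print(mfm_encode("10001111"))  # NRNNRNRNNRNRNRNR
```

Only the run of zeros in the middle of the byte gets clocking reversals; the ones carry their own reversals.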

This table shows the encoding pattern for MFM (where I have designated "R" to represent a flux reversal and "N" to represent no flux reversal). The average number of flux reversals per bit on a random bit stream pattern is 0.75. The best case (a repeating pattern of ones and zeros, "101010...") would be 0.5, the worst case (all ones or all zeros) would be 1:

Bit Pattern Encoding Pattern Flux Reversals Per Bit Bit Pattern Commonality In Random Bit Stream
0 (preceded by 0) RN 1 25%
0 (preceded by 1) NN 0 25%
1 NR 1 50%
Weighted Average: 0.75 flux reversals per bit



Since the average number of reversals per bit is half that of FM, the clock frequency of the encoding pattern can be doubled, allowing approximately double the storage capacity of FM for the same areal density. The only cost is somewhat increased complexity in the encoding and decoding circuits, since the algorithm is a bit more complicated. However, this isn't a big deal for controller designers, and it is a small price to pay for doubling capacity.

FM and MFM encoding write waveform for the byte "10001111".
As you can see, MFM encodes the same data in half as much
space, by using half as many flux reversals per bit of data.

MFM encoding was used on the earliest hard disks, and also on floppy disks. Since the MFM method about doubles the capacity of floppy disks compared to earlier FM ones, these disks were called "double density". In fact, MFM is still the standard that is used for floppy disks today. For hard disks it was replaced by the more efficient RLL methods. This did not happen for floppy disks, presumably because the need for more efficiency was not nearly so great, compared to the need for backward compatibility with existing media.

Note that MFM encoding is sometimes called (1,3) RLL, as the run lengths of non-reversal cells between pulses are in the range 1 to 3 (see RLL Technical Details).

Atari Double Density Diskette Formats

The Atari ST uses the Western Digital WD1772 Floppy Disc Controller (FDC) to access 3 1/2 inch (or, to be more precise, 90 mm) floppy disks. Western Digital recommended using the IBM 3740 format for single density diskettes and the IBM System 34 format for double density diskettes. The default Atari format used by TOS is actually slightly different (closer to the ISO double density format), as it does not have an IAM byte (and the associated gap) before the first IDAM sector of the track (see diagram below).
However the WD1772 (and therefore the Atari) is capable of reading both formats without problem, but the reverse is usually not true (i.e. floppies formatted on early Atari machines can't be read on PCs, but floppies created on a PC can be read on an Atari).

IBM System 34 Double Density Format (this is the format produced on a DOS machine formatting in 720K)

ISO Double Density Format.

Atari Standard Double Density Format

Below is a detailed description of the "Standard Atari Double Density Format" as created by the early TOS.
Note: there is not really any standard convention for the naming of the gaps, nor for the level of detail into which they must be decomposed. This document uses a gap numbering scheme which is a combination of the IBM and ISO standards but provides more detail for the gap between the ID record and the DATA record. Usually only one gap is described between these two records, but here it is decomposed into a post-ID gap (Gap 3a) and a pre-data gap (Gap 3b), as this allows a more detailed description; of course they can easily be combined into one Gap 3. Not shown in the diagram below: when the floppy is formatted with an IAM (index address mark), Gap 1 is decomposed into two gaps: a post-index gap (Gap 1a) and a post-IAM gap (Gap 1b).

The tables below indicate the standard values of the different gaps for a standard Atari diskette with 9 sectors of 512 user data bytes. They also indicate the minimum acceptable values for these gaps, as specified in the WD1772 datasheet, when formatting non-standard diskettes.

Name Standard Values (9 sectors) Minimum Values (Datasheet)
Gap 1 Post Index 60 x $4E 32 x $4E
Gap 2 Pre ID 12 x $00 + 3 x $A1 8 x $00 + 3 x $A1
Gap 3a Post ID 22 x $4E 22 x $4E
Gap 3b Pre Data 12 x $00 + 3 x $A1 12 x $00 + 3 x $A1
Gap 4 Post Data 40 x $4E 24 x $4E
Gap 5 Pre Index ~ 664 x $4E 16 x $4E

Standard Record Gap Value (Gap 2 + Gap 3a + Gap 3b + Gap 4) = 92 Bytes / Record
Minimum Record Gap Value (Gap 2 + Gap 3a + Gap 3b + Gap 4) = 72 Bytes / Record
Standard Record Length (Record Gap + ID + DATA) = 92 + 7 + 515 = 614 bytes
Minimum Record Length (Record Gap + ID + DATA) = 72 + 7 + 515 = 594 bytes
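These figures can be cross-checked with a few lines of Python (a sketch, not from the original document; the 6250-byte track capacity used below is derived in the next section):

```python
ID, DATA = 7, 515                          # ID record and DATA record lengths
gap2, gap3a, gap3b, gap4 = 15, 22, 15, 40  # standard per-record gaps, in bytes
record = gap2 + gap3a + gap3b + gap4 + ID + DATA
print(record)                              # 614 bytes per record

gap1, gap5 = 60, 664                       # post-index and pre-index gaps
print(gap1 + 9 * record + gap5)            # 6250: a full standard 9-sector track
```

The standard 9-sector layout therefore fills the track exactly.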

Standard 9-10-11 Sectors of 512 Bytes Format

Note that 3 1/2 inch FDs spin at 300 RPM, which implies a 200 ms total track time. As the MFM cells have a length of 4 µs, this gives a total of 50000 cells and therefore about 6250 bytes per track.
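The arithmetic can be written out as a quick sketch (Python, illustrative only):

```python
rpm = 300
track_time_us = 60_000_000 // rpm   # 200000 µs per revolution (200 ms)
cell_us = 4                         # one MFM bit cell
cells = track_time_us // cell_us    # 50000 bit cells per track
print(cells // 8)                   # 6250 bytes per track
```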

The table below indicates possible (i.e. classical) values of the gaps for tracks with 9, 10, and 11 sectors.

Name 9 Sectors: # bytes 10 Sectors: # bytes 11 Sectors: # bytes
Gap 1 Post Index 60 60 10
Gap 2 Pre ID 12+3 12+3 3+3
Gap 3a Post ID 22 22 22
Gap 3b Pre Data 12+3 12+3 12+3
Gap 4 Post Data 40 40 1
Gap 2-4 92 92 44
Record Length 614 614 566
Gap 5 Pre Index 664 50 20
Total Track 6250 6250 6250

Respecting all the minimum values on an 11 sectors/track format gives a length of: L = Min Gap 1 + (11 x Min Record Length) + Min Gap 5 = 32 + 6534 + 16 = 6582 bytes (which is 332 bytes above the maximum track length). Therefore we need to save about 30 bytes per sector in order to be able to write such a track. For example the last column of the table above shows the values used by the Superformat v2.2 program for 11 sectors/track (values analyzed with a Discovery Cartridge). As you can see, the track is formatted with Gap 2 reduced to 6 bytes and Gap 4 reduced to 1 byte! These values do not respect the minimums specified by the WD1772 datasheet, but they make sense: it is mandatory to leave enough time for the FDC between the ID block and the corresponding DATA block, which implies that Gaps 3a & 3b should not be shortened.

The reduction of Gaps 4 & 2 to only 7 bytes between a DATA block and the next ID block does not leave enough time for the FDC to read the next sector on the fly, but this is acceptable because that sector can be read on the next rotation of the FD. This obviously has an impact on performance, which can be minimized by using sector interleaving (explained below). However, it is somewhat dangerous to have such a short gap between the data and the next ID, because the writing of a data block needs to be perfectly calibrated or it will collide with the next ID block. This is why such a track is actually reported as "read only" in the DC documentation and is sometimes used as a protection mechanism.
Of course you have a better chance of successfully writing 11 sectors on the first track (the outer one) than on the last track (the inner one), as the bit density is higher in the latter case. It is also important to have a floppy drive with a stable and minimal rotation speed deviation (i.e. the RPM should not be more than 1% above 300).

Standard 128-256-512-1024 Bytes / Sector Format

The table below indicates standard (i.e. classical) gap values for tracks with sectors of 128, 256, 512, and 1024 bytes.

Name 29 sectors of 128 bytes 18 sectors of 256 bytes 9 Sectors of 512 bytes 5 Sectors of 1024 bytes
Gap 1 Post Index 40 42 60 60
Gap 2 Pre ID 10+3 11+3 12+3 40+3
Gap 3a Post ID 22 22 22 22
Gap 3b Pre Data 12+3 12+3 12+3 12+3
Gap 4 Post Data 25 26 40 40
Gap 2-4 75 77 92 120
Record Length 213 343 614 1154
Gap 5 73 76 664 480
Total Track 6250 6250 6250 6250

Interleaving: normally the sector number is incremented by 1 for each record (i.e. there is no need to interleave with DD diskettes like there used to be with older FDs); however, sectors can be written in any order.

Western Digital WD1772 Information

This section contains some information about the inner workings of the WD1772 Floppy Disc Controller used in Atari ST machines. More specifically, we will be looking at the following blocks of the FDC.

For more information, read the Western Digital datasheet. Also of interest are the programming information document as well as an interesting article on FDCs by David Small.

FDC PLL Data Separator

As shown in the diagram above, the WD1772 floppy disc controller has an internal PLL data separator unit which allows it to separate the clock from the data despite a certain frequency variation on the read data input. The function of the data separator is to lock onto the incoming serial read data. When lock is achieved, the serial front-end logic of the chip is provided with a clock which is synchronized to the read data. The synchronized clock, called Data Window, is used to internally sample the serial data: one state of Data Window is used to sample the data portion of the bit cell, and the alternate state samples the clock portion. Serial-to-parallel conversion logic separates the read data into clock and data bytes and feeds them into the DSR (Data Shift Register).

But first how does the FDC differentiate clock bits from data bits?

With FM encoding this is easy, as the clock is always sent! Therefore whenever a bit is missing in the stream, this indicates a 0 data bit.

As we have already explained, with MFM encoding a clock is only added between two consecutive 0 data bits, and it is therefore not possible to differentiate between clock and data bits in an arbitrary sequence of bits. But during the sequence of 12 $00 bytes before the Sync Bytes (Gap 2 & Gap 3b), it is possible for the data separator to learn the position of the clock bits after a few of these bytes, and eventually to shift by half a position if not properly locked. Note that a sequence of $FF bytes would result in the same bit pattern, but it would lead to locating the data bits instead of the clock bits.
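This half-cell ambiguity can be demonstrated with a small MFM sketch (illustrative Python, not from the original article): a run of zeros and a run of ones produce exactly the same reversal spacing, just shifted by half a bit cell.

```python
def mfm(bits, prev="0"):
    """Minimal MFM encoder: '1' -> NR, '0' after '0' -> RN, '0' after '1' -> NN."""
    out = []
    for b in bits:
        out.append("NR" if b == "1" else ("RN" if prev == "0" else "NN"))
        prev = b
    return "".join(out)

zeros = mfm("0" * 16)            # RNRNRN... reversals on the clock positions
ones  = mfm("1" * 16, prev="1")  # NRNRNR... reversals on the data positions
# Same waveform, shifted by half a bit cell:
print(zeros[:-1] == ones[1:])    # True
```

This is why the separator can lock onto the wrong half-cell and mistake clock bits for data bits until a Sync Byte resolves the ambiguity.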

To support reliable disk reads, the data separator must track fluctuations in the read data frequency. Frequency errors primarily arise from two sources: motor rotation speed variation and instantaneous speed variation (ISV). Note that a second requirement, and one that opposes the ability to track frequency shifts, is the response to bit jitter.

Jitter tolerance definition: the jitter immunity of the system is dominated by the data PLL's response to phase impulses. This is measured as a percentage of the theoretical data window, by dividing the maximum readable bit shift by a 1/4 bit cell distance.

Locktime definition: the lock, or settling, time of the data PLL is usually designed to be 64 bit times (8 sync bytes). This value assumes that the sync field jitter is 5% of the bit cell or less. This level of jitter is realistic for a constant bit pattern, as inter-symbol interference should be equal, thus nearly eliminating random bit shifting.

Capture range definition: Capture Range is the maximum frequency range over which the data separator will acquire phase lock with the incoming data signal. In a floppy disk environment, this frequency variation is composed of two components: drive motor speed error and ISV (Instantaneous Speed Variation). Frequency is a factor which may determine the maximum level of the ISV component. In general, as frequency increases the allowed magnitude of the ISV component will decrease.

Unfortunately, detailed information on the 3 important PLL parameters (jitter tolerance, locktime, and capture range) is NOT provided for the WD1772.

Note that the IBM standard allows a deviation of the drive rotation speed within a 2% range, and therefore the PLL in the FDC is supposed to tolerate a 4% variation from the central frequency. In practice the PLL copes with at least a 10% variation for MFM encoding and a 100% variation for FM encoding. It is therefore possible to write bit cells at a frequency between 225 and 275 kHz (i.e. roughly 3.6 µs to 4.4 µs cell width) in MFM and still be able to read these bits correctly.

An excellent, and easy to understand, document on the subject of phase-locked loops: A Control Centric Tutorial by Danny Abramovitch. Look also at the Phase lock loop with application document.

I would like to clear up a misconception about the PLL data separator. I have read things like "As the data separator drifts during gap blocks you will get trash data during a read track ... and will accidentally lock onto data bits ..." (from an excellent article on FDCs by David Small). In other words, it is said that the data separator gets out of sync during gaps because its internal PLL clock drifts, and that it needs a sync sequence to get back in sync.

While it is true that the data separator often gets out of sync during gaps, the reason is totally different. It is obviously not caused by a drift of the clock during a small gap; otherwise how would it be possible to read a data block of 1027 bytes! No, the reason comes from the fact that while the address fields are written only once, during formatting (with the write track command), the data fields are rewritten many times (with the write sector command), and the time it takes the FDC to switch from "address field matching" to "data field writing" is certainly not precise at the level of a bit! Therefore it is the data field that drifts in position on the track, and not the data separator clock!

FDC Address Mark Detector

The purpose of the Address Mark Detector in the FDC is to recognize an address mark in the flow of data received from the floppy drive. For that purpose we also need an unambiguous way to find the start of a byte in the flow of bits (data synchronization).

FM Address Marks

In FM encoding (not used by the Atari) synchronization is done by looking for an Address Mark with missing clock transitions (normal bytes use an $FF clock pattern):

ADDRESS MARK               Data Pattern  Clock Pattern
Index Address Mark         FC            D7
ID Address Mark            FE            C7
Data Address Mark          FB            C7
Deleted Data Address Mark  F8            C7

MFM Address Marks

In MFM encoding (used by the Atari) synchronization is done by searching for a sequence of special bytes.

There is a special data sequence encoded at the beginning of each sector (GAP2 & GAP4), with special hardware in the FDC to detect it:

  • First, there is a long string of zeros; a hardware 'zero detector' is enabled to look for it (at this point, it could as easily find a string of ones as a string of zeros, since they are identical when taken out of context). As we have already seen, this allows the separation of clock bits from data bits.
  • Second, special bytes are encoded that violate the MFM encoding rules: either $A1 or $C2 bytes are written, with a missing clock in one of the sequential zero bits. These two special bytes with missing clocks are called the Sync Bytes. In practice there is a sequence of 3 consecutive Sync Bytes that should normally be followed by an Address Mark (IAM, IDAM, or DAM) as described in the track format.

In summary, if a sequence of zeros followed by a sequence of three Sync Bytes is found, then the PLL (phase-locked loop) and data separator are synchronized and data bytes can be read. The following table shows the Sync Bytes and the Address Marks used by the WD1772 on the Atari:

ADDRESS MARK                      Data Pattern  Clock Pattern  Missing clock between bits (1)  Resulting Bit Sequence
Sync Byte (before IDAM or DAM)    $A1           $0A            4 & 5                           0100010010001001 ($4489)
Sync Byte (before IAM)            $C2           $14            3 & 4                           0101001000100100 ($5224)
Index Address Mark (IAM)          $FC
ID Address Mark (IDAM)            $FE
Data Address Mark (DAM)           $FB
Deleted Data Address Mark (DDAM)  $F8

(1) I could not make sense of the meaning of "a missing clock between bits x & y" as provided in the FDC datasheet. For example, if I take the bits in the order they are sent (i.e. MSB to LSB), it should encode $A1 as 0100010000101001 and $C2 as 0101000010100100, but obviously this is not the case! Note that this encoding would violate the (1,3) RLL rules by having a sequence of 4 consecutive zeros; we would therefore be guaranteed not to find this configuration in a stream of normal data, and we would not have the false sync byte pattern problem/bug during a read track command.

Note that with the WD1772 an $A1 sync byte is produced by sending a $F5 byte to the FDC during the command, a $C2 sync byte is produced by sending a $F6 byte, and the 2 CRC bytes are written by sending a single $F7 byte. Normally the $C2 sync byte is only used before an IAM and is therefore normally not used on standard Atari diskettes. However, having an IAM record on a track (as formatted on a PC) is perfectly acceptable on an Atari.

MFM Sync Byte Pattern

Figures below depict the $A1 and $C2 bytes with and without missing clock (for simplicity the following representation depicts the flux reversal as ideal pulses):

$A1 with missing clock Normal $A1
$C2 with missing clock Normal $C2

False Sync Byte Pattern

During normal reading of a data sector the Sync Mark Detector of the WD1772 is disabled. However, during a read track command the sync mark detector is active all the time. It has been said that the WD1772 has a bug that causes it to mistakenly find a $C2 sync mark inside data (for example see "copy me I want to travel" from Claus Brod). Well, in fact the Sync Mark Detector does its job perfectly! It is just that the $C2 sync mark was not chosen wisely, as it is quite possible to find several sequences of bits (see what Gunstick says on the subject) that have the exact same pattern as the $C2 with missing clock!
The consequence is that if a sequence of bits matching the $C2 sync byte occurs inside a data block and you read it with a read track command, then the FDC synchronizes on this pattern and the following bits are shifted, resulting in a completely different reading of this sector! This "feature" can be used as a protection mechanism to hide some information inside a data block (see sync character in data field).

For example if we encode the $x029 sequence we get the following result:

And as you can see we can find the sync byte $C2 with missing clock in this sequence of bits (shifted by a half cell) !

Note that there are many sequences that match the $C2 sync byte (please refer to references here), but I have not been able to find any sequence that matches the $A1 sync byte (which does not mean that it does not exist!).


CRC Logic in WD1772

The CRC generator used by the FDC uses the classical CCITT polynomial G(x)=x^16 + x^12 + x^5 + 1.
To understand the hardware calculation of the CRC value, view the data as a bit stream fed into a cyclic 16-bit shift register. At each step, the input bit is XORed with bit 15, and the result is fed back at various places in the register: bit 0 receives the feedback, bit 5 receives bit 4 XOR feedback, and bit 12 receives bit 11 XOR feedback. All other bits are simply shifted, e.g. bit 1 gets bit 0 on a clock edge (see figure below).

At the beginning, all flip-flops of the register are set to 1 (initialized to $FFFF). After the last data bit is processed this way, the register contains the CRC. To check a CRC you do the same, but the last thing fed to the register is the CRC itself: if all is fine, all bits will be 0 afterwards. Since bytes are recorded with their highest bit first on floppies, they are also processed by the CRC register that way, and the resulting CRC is written to the floppy with bit 15 first and bit 0 last (big-endian). The CRC computation begins with the first $A1 byte, which presets the CRC register, so the CRC of a typical ID record would be computed as the CRC of
    " $A1, $A1, $A1, $FE, $00, $00, $03, $02 "
and have the value $AC0D, where $AC is the first CRC byte and $0D the second.

In summary, the 16-bit CRC of the WD1772 is generated using the CCITT generator polynomial. It is initialized to $FFFF, and the computation includes all characters starting with the first $A1 address mark up to the CRC characters. It is recorded and read most significant bit first.

Note-1: The WD1772 documentation is somewhat confusing on the subject of CRC computation. It is mentioned that, in MFM, the CRC is initialized by receipt of a $F5 byte (this is also shown on the command flowchart). As three $F5 are sent to generate the three $A1 sync bytes, one might think that the CRC is reset each time, and therefore that the CRC computation would start with the last $A1, but this is not the case.
Note-2: It is also possible to use $C2 as a sync byte, but in this case beware that the CRC is not reset. Therefore writing a sequence like "$C2, $C2, $C2, $FE, $00, $00, $03, $02" should be read correctly but results in a wrong CRC (this can be used for protection as described here).

Example of CRC Code

There are many articles and examples of CCITT-CRC code available on the net. I have selected this one in French, Le contrôle CRC by Julien Rosener, and these in English: A painless guide to crc error detection algorithm and crctable.c by Ross Williams, and crcccitt.c - a demonstration of look up table based CRC by Jack Klein.

But the code I prefer was posted here by Simon Owen on the Atari forum.

"The basic code to compute the CRC is"

for (int i = 0 ; i < 8 ; i++)
  crc = (crc << 1) ^ ((((crc >> 8) ^ (b << i)) & 0x0080) ? 0x1021 : 0);

"Or use a normal CRC-CCITT look-up table for much faster computation:"

crc = (crc << 8) ^ crc_ccitt[((crc >> 8) & 0xff)] ^ b;

"Where the look-up table is the normal one generated from the 0x1021 polynomial using something like: "

for (int i = 0 ; i < 256 ; i++)
{
  WORD w = i << 8;

  for (int j = 0 ; j < 8 ; j++)
    w = (w << 1) ^ ((w & 0x8000) ? 0x1021 : 0);

  crc_ccitt[i] = w;
}

You can find here a small test program to run on PC, with source, based on the above code.

Atari ST Floppy Disk

Low-Level and High-Level Formatting

There are two steps involved in formatting magnetic media such as floppy disks and hard disks:

The first step involves the creation of the actual structures on the surface of the media that are used to hold the data. This means recording the tracks and marking the start of each sector on each track. This is called low-level formatting, and sometimes is called "true formatting" since it is actually recording the format that will be used to store information on the disk. This was described in this section. Once the floppy disk has been low-level formatted, the locations of the tracks on the disk are fixed in place.

The second formatting step is high-level formatting. This is the process of creating the disk's logical structures such as the file allocation table and root directory. The high-level format uses the structures created by the low-level format to prepare the disk to hold files using the chosen file system. In order for the TOS to use a diskette, it has to know the number of tracks, the number of sectors per track, the size of the sectors, and the number of sides. This information is defined in a special sector called the boot sector. Beyond that, the TOS needs to find information about any files stored on the diskette (e.g. the location of all the sectors belonging to a file, its attributes, ...) as well as global information (e.g. the space still available on the diskette). This information is kept in the directory and FAT structures.

The boot sector

The boot sector is always located on track 0, side 0, first sector of the diskette. This sector is read by the TOS as soon as you change diskettes, to find important information about the diskette (e.g. it contains a serial number that identifies the diskette). Some parameters are loaded from this sector to be used by the BIOS and are stored in a structure called the BPB (BIOS Parameter Block). Optionally, the boot sector also contains a bootstrap routine (the loader) that allows starting a relocatable program at boot time (usually a TOS image).

The structure of the boot sector is described below (the grayed areas are stored in the BPB). Note that the Atari boot sector is similar to the boot sector used by the IBM PC, and therefore 16-bit words are stored in low/high byte Intel format (e.g. BPS = $00 $02 indicates $200 bytes per sector).

Name Offset Bytes Contents
BRA $00 2 This word contains a 680x0 BRA.S instruction to the bootstrap code in this sector if the disk is executable, otherwise it is unused.
OEM $02 6 These six bytes are reserved for use as any necessary filler information. The disk-based TOS loader program places the string 'Loader' here.
SERIAL $08 3 This 24-bit value is a unique disk serial number.
BPS $0B 2 This is an Intel format word (low byte first) which indicates the number of bytes per sector on the disk (usually 512).
SPC $0D 1 This is a byte which indicates the number of sectors per cluster on the disk. Must be a power of 2 (usually 2)
RESSEC $0E 2 This is an Intel format word which indicates the number of reserved sectors at the beginning of the media preceding the start of the first FAT, including the boot sector itself. It is usually one for floppies.
NFATS $10 1 This is a byte indicating the number of File Allocation Tables (FATs) stored on the disk. The value is usually two, as one extra copy of the FAT is kept to recover data if the first FAT gets corrupted.
NDIRS $11 2 This is an Intel format word indicating the total number of file name entries that can be stored in the root directory of the volume.
NSECTS $13 2 This is an Intel format word indicating the number of sectors on the disk (including those reserved).
MEDIA $15 1 This byte is the media descriptor. For hard disks this value is set to 0xF8, otherwise it is unused on Atari.
SPF $16 2 This is an Intel format word indicating the number of sectors occupied by each of the FATs on the volume. Given this information, together with the number of FATs and reserved sectors listed above, we can compute where the root directory begins. Given the number of entries in the root directory, we can also compute where the user data area of the disk begins.
SPT $18 2 This is an Intel format word indicating the number of sectors per track (usually 9)
NHEADS $1A 2 This is an Intel format word indicating the number of heads on the disk. For a single side diskette the value is 1 and for a double sided diskette the value is  2.
NHID $1C 2 This is an Intel format word indicating the number of hidden sectors on a disk (not used by Atari).
EXECFLAG $1E 2 This word is copied into the cmdload system variable. It is used to determine whether the command.prg program has to be started after loading the OS.
LDMODE $20 2 This word indicates the load mode. If it equals zero, the file specified by the FNAME field is located and loaded (usually the file is TOS.IMG). If it is not zero, the sectors specified by the SSECT and SECTCNT fields are loaded.
SSECT $22 2 This is an Intel format word indicating the logical sector from which to boot. This variable is only used if LDMODE is not zero.
SECTCNT $24 2 This is an Intel format word indicating the number of sectors to load at boot. This variable is only used if LDMODE is not zero.
LDAADDR $26 4 This is a long word (4 bytes) indicating the memory address where the boot program will be loaded.
FATBUF $2A 4 This is a long word (4 bytes) indicating the memory address where the FAT and catalog sectors must be loaded.
FNAME $2E 11 This is the name of an image file to be loaded when LDMODE equals zero. It has exactly the same structure as a normal file name, that is 8 characters for the name and 3 characters for the extension.
RESERVED $39 1 Reserved.
BOOTIT $3A 452 Boot program, if any, executed after the boot sector has been loaded.
CHECKSUM $1FE 2 The entire boot sector word summed with this Motorola format word will equal 0x1234 if the boot sector is executable or some other value if not.

The data beginning at offset $1E (colored in yellow) are only used for a bootable diskette. For a diskette to be recognized as bootable, the boot sector must contain the text "Loader" starting at the third byte (offset $02), and the word sum of the entire boot sector must equal $1234.

The boot process is usually done in 4 stages:

  1. The boot sector is loaded and the boot program it contains is executed.
  2. The FAT and the catalog are loaded from the diskette and the loader searches for the indicated file name.
  3. The program image is loaded, usually starting at address $40000.
  4. The loaded program is executed from its beginning.

See also some Boot sector code.

Directory Structure

The TOS arranges and stores file-system contents in directories. Every file system has at least one directory, called the root directory (also referred to as the catalog on the Atari), and may have additional directories either in the root directory or ordered hierarchically below it. The contents of each directory are described in individual directory entries. The TOS strictly controls the format and content of directories.
The root directory is always the topmost directory. The TOS creates the root directory when it formats the storage medium (high level formatting). The root directory can hold information for only a fixed number of files or other directories, and the number cannot be changed without reformatting the medium. A program can identify this limit by examining the NDIRS field in the BPB structure described in the boot sector section. This field specifies the maximum number of root-directory entries for the medium.
A user or a program can add new directories within the current directory, or within other directories. Unlike the root directory, a new directory is limited only by the amount of space available on the medium, not by a fixed number of entries. The TOS initially allocates only a single cluster for the directory, allocating additional clusters only when they are needed. Every directory except the root directory has two entries when it is created. The first entry specifies the directory itself, and the second entry specifies its parent directory (the directory that contains it). These entries use the special directory names "." (an ASCII period) and ".." (two ASCII periods), respectively.

The TOS gives programs access to files in the file system. Programs can read from and write to existing files, as well as create new ones. Files can contain any amount of data, up to the limits of the storage medium. Apart from its contents, every file has a name (possibly with an extension), access attributes, and an associated date and time. This information is stored in the file's directory entry, not in the file itself.

The root directory is located just after the FATs (i.e. on a single-sided FD: side 0, track 1, sector 3; on a DS FD: side 1, track 0, sector 3) and is composed of 7 sectors. Each entry in the root directory is described by the following structure:

Name Bytes Contents
FNAME 8 Specifies the name of the file or directory. If the file or directory was created by using a name with fewer than eight characters, space characters (ASCII $20) fill the remaining bytes in the field. The first byte in the field can be a character or one of the following values:
  • $00 The directory entry has never been used. The TOS uses this value to limit the length of directory searches.
  • $05 The first character in the name actually has the value $E5.
  • $2E The directory entry is an alias for this directory or the parent directory. If the remaining bytes are space characters (ASCII $20), the SCLUSTER field contains the starting cluster for this directory. If the second byte is also $2E (and the remaining bytes are space characters), SCLUSTER contains the starting cluster number of the parent directory, or zero if the parent is the root directory.
  • $E5 The file or directory has been deleted.
FEXT 3 Specifies the file or directory extension. If the extension has fewer than three characters, space characters (ASCII $20) fill the remaining bytes in this field.
ATTRIB 1 Specifies the attributes of the file or directory. This field can contain some combination of the following values:
  •  $01 Specifies a read-only file.
  •  $02 Specifies a hidden file or directory.
  •  $04 Specifies a system file or directory.
  •  $08 Specifies a volume label. The directory entry contains no other usable information (except for date and time of creation) and can occur only in the root directory.
  •  $10 Specifies a directory.
  •  $20 Specifies a file that is new or has been modified.
  •  All other values are reserved. (The two high-order bits are set to zero.) If no attributes are set, the file is a normal file.
RES 10 Reserved; do not use.
FTIME 2 Specifies the time the file or directory was created or last updated. The field has the following form:
  •  bits 0-4 Specifies two-second intervals. Can be a value in the range 0 through 29.
  •  bits 5-10 Specifies minutes. Can be a value in the range 0 through 59.
  •  bits 11-15 Specifies hours. Can be a value in the range 0 through 23.
FDATE 2 Specifies the date the file or directory was created or last updated. The field has the following form:
  •  bits 0-4 Specifies the day. Can be a value in the range 1 through 31.
  •  bits 5-8 Specifies the month. Can be a value in the range 1 through 12.
  •  bits 9-15 Specifies the year, relative to 1980.
SCLUSTER 2 Specifies the starting cluster of the file or directory (index into the FAT)
FSIZE 4 Specifies the size of the file, in bytes.

FAT Structure

The file allocation table (FAT) is an array used by the TOS to keep track of which clusters on a drive have been allocated for each file or directory. As a program creates a new file or adds to an existing one, the system allocates sectors for that file, writes the data to the given sectors, and keeps track of the allocated sectors by recording them in the FAT. To conserve space and speed up record-keeping, each record in the FAT corresponds to two or more consecutive sectors (called a cluster). The number of sectors in a cluster depends on the type and capacity of the drive but is always a power of 2. Every logical drive has at least one FAT, and most drives have two, one serving as a backup should the other fail. The FAT immediately follows the boot sector and any other reserved sectors.

Depending on the number of clusters on the drive, the FAT consists of an array of either 12-bit or 16-bit entries. Drives with more than 4086 clusters have a 16-bit FAT; those with 4086 or fewer clusters have a 12-bit FAT. As Atari diskettes always have fewer than 4086 clusters, the FATs on Atari diskettes are always 12-bit FATs.

The first two entries in a FAT (3 bytes for a 12-bit FAT) are reserved. In most cases the first byte contains the media descriptor (usually $F9) and the remaining reserved bytes are set to $FF. Each FAT entry represents a corresponding cluster on the drive. If the cluster is part of a file or directory, the entry contains either a marker specifying the cluster as the last in that file or directory, or an index pointing to the next cluster in the file or directory. If a cluster is not part of a file or directory, the entry contains a value indicating the cluster's status.

The following table shows possible FAT entry values:

Value Meaning
$000 Available cluster.
$002-$FEF Index of the entry for the next cluster in the file or directory. Note that $001 does not appear in a FAT, since that value corresponds to the FAT's second reserved entry. Index numbering is based on the beginning of the FAT.
$FF0-$FF6 Reserved
$FF7 Bad sector in cluster; do not use cluster.
$FF8-$FFF Last cluster of file or directory. (usually the value $FFF is used)

Each file and directory consists of one or more clusters, each cluster represented by a single entry in the FAT. The SCLUSTER field in the directory structure corresponding to the file or directory specifies the index of the first FAT entry for the file or directory. This entry contains $FFF if there are no further FAT entries for that file or directory, or it contains the index of the next FAT entry for the file or directory. For example, the following segment of a 12-bit FAT shows the FAT entries for a file consisting of four clusters:

  •  $003 Cluster 2 points to cluster 3
  •  $005 Cluster 3 points to cluster 5
  •  $FF7 Cluster 4 contains a bad sector
  •  $006 Cluster 5 points to cluster 6
  •  $FFF Cluster 6 is the last cluster for the file
  •  $000 Cluster 7 is available
  •  ...

Note that a cluster entry of $000 does not mean the cluster is empty, only that it is available. This is because when a file is deleted the data is not erased: only the first letter of the file's name in the directory structure is set to $E5, and all clusters used by the deleted file are set to $000.

Atari disk images used by emulators

There are already a lot of sites dedicated to the emulation of the Atari ST... so if you want to learn more on this subject you should first browse these sites. In order to run an emulator you need a TOS ROM (not covered here). In order to run a program or a game on an emulator you need a disk image of this program/game. There are plenty of disk images that you can find on the Internet and on P2P networks.

Reminder on Program protection used with the Atari

Many commercial Atari programs/games have some sort of protection mechanism. As for other platform the protection mechanism has evolved over time from very simple to very sophisticated.  The main protection mechanisms used on the Atari computers fall into three categories:
  1. The first mechanism is to request the user to enter information at the start of (or even many times during) the program/game. This information usually comes from "documents" that are difficult to reproduce (at least at a time when scanners, color copiers, etc. were not available). For example the documentation would contain colored text that can only be read with special filtering glasses...
  2. The second mechanism is to use a hardware key (also called a dongle) that usually plugs into the Atari cartridge port. This was widely used for professional programs. For example most of the music programs from Steinberg used such dongles.
  3. The third mechanism uses "copy protected disks". Many commercial software manufacturers used disk formats that cannot be reproduced by a standard Atari system as a means to protect their programs. Using special tricks, their software could verify that the disks were not copied. This was usually done by checking for some exotic format at load time, and the program would refuse to run if this information was not correctly reproduced. The copy protection mechanisms on the Atari started with simple tricks that could be reproduced by specialized software and ended up with very sophisticated mechanisms that required special hardware (e.g. Blitz cable, Discovery Cartridge, ...).
Note on usage of disk images for protected programs/games:
  • For protection of type 1 you need to enter the information from the documents,
  • For protection of type 2 there is currently no solution (some people are working on imaging dongles...) other than using cracked versions of the programs,
  • For protection of type 3 you either need a specific emulator like STeem with PASTI.dll for running protected disk images (e.g. STX files), or you need special hardware to make a perfect copy.

You should also know that a lot of images of "cracked programs" are available on the Internet. In that case the protection mechanisms have of course been removed, normal disk images are fine, and the programs can be used without the documentation.

Disk Image formats

There are five major Atari disk image formats used by recent emulators:
  • ST : Supported by all emulators, it is the simplest format since it is a straight copy of the readable data of a disk. Created originally for the PacifiST emulator, it does not allow imaging copy-protected disks.
  • MSA : An acronym for Magic Shadow Archiver, it is a format created on the Atari by the compression program of the same name. This format, too, is supported by almost all emulators. It contains the same data as the ST format, the only difference being that the data is compressed. A variation of the program on the Atari allows saving the data without any compression; this results in an ST file but with an MSA header. A nice feature of the MSA program is that it allows splitting an archive into multiple files, thus facilitating the transfer of large disk images on floppies.
  • DIM : A format created by the well known Atari copy program: "FastCopy Pro". The non-compressed version of this format contains the same information as the ST and MSA formats, but with a proprietary header. This format is also supported by most emulators.
  • STT : Recently created and developed by the creators of the STeem Engine emulator, it is supposed to allow the copy of many original disks, including certain copy-protected games. It supports disks with various numbers of tracks, tracks of different sizes, and other details. At the moment it is only supported by STeem.
  • STX: Recently created by the PASTI initiative (Atari ST Imaging & Preservation Tools). The imaging tools can virtually create images of any ST disk including copy protected disks. The STX Images can be used by the STeem and SainT emulators.

Note also that most recent emulators like STeem can directly read zipped disk images. For example STeem can directly mount a zipped file (.zip) that contains a disk image in any of the supported disk image formats.

Making Disk Images from ST Original Floppies

This section tries to answer the question: I have some Atari floppies that I want to use with my favorite emulator...

  •  Making ST Disk Images (all the programs have to be run on a PC)
    For the ST format you can use Makedisk (DOS), imgbuild (DOS), or wfdcopy (Windows). Instructions can be found at the Makedisk tutorial site and Mr Nours site.
    I have tested the three programs successfully on simple non-protected FDs without problems. My favorite is wfdcopy, which runs under Windows with a very simple GUI.
    Although it should be possible to use the Gemulator Explorer to create disk images, it did not work for me.
  •  Making MSA Images (this program runs on an Atari ST)
    I have made the tests with MSA II - V2.3. The process is straightforward: start the program, specify the name and directory for the image, indicate whether you want the file to be compressed or not, and click the "Disk -> File" button... and you are done. You should use compressed mode to get smaller disk images.
  •  Making DIM Disk Images (this program runs on an Atari ST)
    First you need FastCopy Pro (the "no version" release, 1.0, and 1.2 all worked for me). Important: if you are using the version without a version number you must first select the "all" option from the "get sectors" choice; in V1.0 and 1.2 this choice is unavailable (always pre-selected to all). After that, click the "image copy" button, then select the "read" button and enter the name of the file you want to create... This file should be directly readable by the emulators.
  •  Making STT Disk Images (the imaging program runs on an Atari ST)
    Use the STeem Disk Imager that comes with STeem itself. Instructions are provided in the disk image howto.txt file. The STT image can be mounted by STeem only.
  •  Making STX Disk Images of copy protected disks (the imaging program runs on an Atari ST)
    Instructions on making STX images can be found in the tutorial section of the AitpaSTi site.

Making ST Floppies from Disk Image files

This section tries to answer the question: I have some interesting disk images that I would like to run on my real Atari...

  •  Making a floppy disk from an ST image (all programs run on a PC)
    You can use makedisk (DOS), ST Disk (DOS), or wfdcopy (Windows). Instructions can be found at the Makedisk tutorial site and Mr Nours site.
    Again, wfdcopy with its nice GUI is the easiest to use and therefore my favorite!
  •  Making a floppy disk from an MSA image (runs on an Atari machine)
    I have made the tests with MSA II - V2.3. The process is straightforward: start the program, specify the name and directory of the image, click the "File -> Disk" button... and your disk is ready.
  •  Making a floppy disk from a DIM image (runs on an Atari machine)
    First you need FastCopy Pro (the "no version" release, 1.0, or 1.2). Click the "image copy" button, then select the "Restore" button and enter the name of the image file...
  •  Making a floppy disk from an STT image
    As far as I know there is no direct way to create a disk from an STT image. However it is possible to first convert the STT image to an ST or MSA image using the MSA converter program, and then use one of the procedures described above on the converted image.
  •  Making a floppy disk from an STX image
    It is not yet possible to create a protected disk from an STX image. The reason is that protected disks contain, on purpose, "errors" that cannot be written directly by the Atari FD controller. However there is an ongoing project to write these images with the "Discovery Cartridge" from Happy Computer.

Disk image utilities

As already mentioned above, if you deal with disk images there is one program you must have: the MSA converter, which runs under Windows. This program not only allows conversion between different image formats but also gives useful information about the image content.
For information, there are some older DOS programs for st to msa conversion or msa to st conversion, as well as two DOS programs to convert a PC disk to/from an ST disk.
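As an illustration of what such utilities do internally, here is a small Python sketch that parses an MSA file header. It is based on the commonly documented MSA layout (a $0E0F signature followed by four big-endian words); treat the field meanings as an assumption of this sketch, not as a specification:

```python
import struct

def read_msa_header(data):
    """Parse the 10-byte MSA header (five big-endian 16-bit words)."""
    ident, spt, sides, start, end = struct.unpack(">5H", data[:10])
    if ident != 0x0E0F:
        raise ValueError("not an MSA image (missing $0E0F signature)")
    return {"sectors_per_track": spt,
            "sides": sides + 1,      # stored as the highest side number
            "start_track": start,
            "end_track": end}

# Header of a standard 720 KB disk: 9 sectors, 2 sides, tracks 0 to 79
hdr = struct.pack(">5H", 0x0E0F, 9, 1, 0, 79)
print(read_msa_header(hdr))
```

The track data that follows the header is RLE-compressed in the MSA format, which is why a converter like the MSA converter is needed to go back and forth between MSA and raw ST images.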

Atari FD Copy Protection Mechanisms

This section deals with the floppy disk protection mechanisms and has been moved to this page.

For a short presentation about the generic program protection used on Atari please refer to this paragraph.

Backup of Copy Protected Disks

Backup Philosophy:

A backup program should always do its utmost to ensure the integrity of the resulting copy. The copy produced should behave exactly like the original, without removing the protection or modifying the program being copied in any way. The backup program must also verify that the copy produced is correct, with correct checksums; dumb "bit"/"analog" copiers should therefore be avoided.

You will find here an Excel table (preliminary and no warranty) with about 950 entries of program/game diskettes and the best way to copy them (for now software and Blitz; DC info to be added).

Software Copiers

The Atari TOS provides a rudimentary backup facility using a simple drag and drop procedure. However this duplication only works for "standard" TOS formatted floppies and fails for anything non-standard. This limitation was exploited by early copy protected disks using simple protection mechanisms (e.g. a non-standard layout). Therefore many specially designed software copiers were developed to bypass some of these early (and easy) protection mechanisms. Some of these backup programs are pretty good at copying many games. Among the most effective are: AC13A, AC12E, D SAPIENS, FASTCOPY 1, PROCOPY 1.5, STARCOPY, STCOPY20, STCOPY 7.65 (most of them, plus more, in this archive) and Fast Copy Pro. This Excel table can help you select the appropriate software copier (no warranty).

Blitz Cable & Program

The Blitz solution is a hardware analog copier. It is good at copying many protected diskettes, but certainly not as good as a digital solution like the DC.

"BLITZ from AT YOUR SERVICE is a revolutionary new back-up system for the Atari ST computer. BLITZ uses ONLY a special cable and software to back-up your software at a speed and power unheard of before. There is NO internal wiring done to the computer. The BLITZ cable copies from Drive 1 out through the Computer printer port to drive 2 (You must have two drives to use BLITZ). It reads Drive 1 and writes Drive 2 at the same time. The time it takes a normal copy program to read a disk, the BLITZ reads and writes the disk in one pass. The BLITZ backs-up protected and non-protected disks in the same amount of time"...

This solution allows backing up many protected games, but fails on many others. Basically the Blitz solution copies the analog data from one floppy drive directly to another without using the FDC (the floppy drives are actually controlled through the parallel interface). It is therefore supposed to handle many protection mechanisms that play with the bitcell timing (e.g. floppies written with a non-standard drive speed, etc...). However this is not a good solution, as it does a "blind analog copy" of the flux without performing any control or check, and the process is very sensitive to drive speed and synchronization. Nevertheless it works in many cases, even if the resulting copy is certainly not a "perfect copy" (i.e. it is usually not possible to make a copy of a copy). If you want to try this solution you need to build a BLITZ cable and use the special BLITZ program (original disk). The following archive contains many other versions of the Blitz program that I have collected (you will need to experiment, as different versions work better for different programs; see also the Excel table).

Here is a quote on analog copiers from Fiat of the SPS project:

Although the data stored on floppy disks is digital (being computer data) it is stored in an analogue form. In hardware copiers the computer reads the disk by interpreting the bitcells making up the flux transitions as 0's and 1's, checks all checksums match and holds that data to be subsequently written. It is "refreshed" and so "new" every time it is written. However, in analogue copiers, there is no such buffering. They work by tightly synchronizing two drives, and the signal sent from one disk to another is a pure analogue signal. There is no checking of integrity (CRC, etc.), because the data is never actually "processed". This is the only practical way a consumer can try to copy protected disks, and such a solution is cheap to develop and manufacture. A disk copy produced by such a process is slightly less "quality" than the master. If you keep making generational copies like this the copy gets worse until the bitcells can no longer "hold together". Unfortunately, since it is digital data the result is that you get errors (bits are mis-read and even "bit shift", that is, corrupt neighbor bits because of their change of value), and likely the game will not work any more. The trick to understanding the above is that what is recorded on floppy disks is not just the data, there are other sorts of information too. They can copy many density protections, just as long as the timing is not too strict (for example, the Amiga version of Rob Northen's Copylock usually fails, and the ST version is quite similar). They can also copy disks with variations in disk format. However, they cannot copy flakey bits (aka weak bits). You cannot blindly image this protection, because of the way it works. See here for more information: flakeybits. This type of protection looks so far very common on the ST and PC.

Here is a quote from Ijor on analog copiers:

Disk analog copiers work very similarly to dubbing an audio tape. They just reproduce the signal from the source diskette onto the destination one. This has several consequences:

  • The copy won’t be aligned to the index hole. Using matching drives from the same manufacturer will help a bit. But both drives can never be synced well enough. So any protection that relies on some kind of index alignment will fail. This includes software that can easily be copied with a software copier.
  • Any “soft” (recoverable) error will be reproduced on the destination and converted to a “hard” (unrecoverable) error. A soft error happens when you read a sector and get an error, but it reads ok after a retry. “Soft” errors are much more common than people realize. You usually never notice them because there is a lot of retry logic going on at different levels of the operating system. Other types of copiers, both software and hardware, will retry on any error and will usually recover from soft errors. But an analog copier will not; it can’t, because it has no way of detecting the error in the first place.
  • No verification is performed. So errors produced when writing are not detected. Again, no verification is possible.
  • No filtering, adjustment or pre-compensation is performed. Bear in mind that we are talking about a mechanical device and a magnetic medium. The signal you read is not exactly what was originally recorded. Digital devices, such as the FDC or a hardware digital copier, perform a lot of filtering that here is not possible.
  • Because no “digitalization” is performed, the signal is degraded further on each “generation” copy. After a small number of generations there is very little chance of getting a working copy. Third generation copies usually don’t work (a copy of a copy of a copy of an original). [more on signal degradation can be found here]

Personal note: I think that the term analog copier is a bit confusing, as the interface of the FD drive is digital (i.e. TTL signals) and therefore we are not dealing with analog signals as we would be with an analog recorder (e.g. a VCR). However, if clearly specified and understood, the term analog is acceptable to indicate the fact that the signal produced by the head of the reading device is not processed by a digital circuit (the FDC) but sent directly to the head of the recording device, and therefore it is not "regenerated". Due to the nature of the analog signal coming from the head, the shape (and therefore the timing) of the converted digital signal will be quite different from the original signal and will definitely degrade over multiple copies.
I disagree with the claim that analog copiers cannot copy weak/flakey bits; actually they should be relatively good at that ...

Discovery Cartridge Hardware

The Discovery Cartridge is a hardware digital copier. It is the best solution for copying any kind of diskette on the ST.

Introduction to the discovery cartridge

Presentation from Happy Computers:

"The Discovery Cartridge's advantage over other disk copying or improvement software and hardware is the HART chip.  HART stands for "HAPPY ATARI ROTATING THING".  This custom made integrated circuit was designed by HAPPY COMPUTERS specifically to allow the full range of floppy disk reading and writing that could be needed with your Atari ST/MEGA computer.  The limitations of the floppy disk controller chip built into your computer disappear.  With the correct drive connected, and the proper supporting software, your computer can read and write virtually any floppy disk format.  Using the standard 3.5" floppy drives for the computer, the immediate benefits are the ability to backup disks that you cannot ordinarily backup, and read and write disks in the MACINTOSH format.  The Discovery Cartridge also allows a variety of options that fulfils the wish list of most Atari computer users, so far as the disk system is concerned."

To learn about the DC, first read this document "Introduction to the discovery cartridge, installation, etc.".

The DC looks like this. It is connected to the Atari by plugging the cartridge into the cartridge port, and to the FDC port through a direct floppy cable.

DC Technical information

The most important document for using the discovery cartridge is the dbackup document (which I have formatted for easier reading). It explains how to create the file that controls the backup process (the dbkupcf.s BC file) and gives technical details on the mfmbtemp file produced by the dmfmbkup program.

The schematics of the discovery cartridge can be found here. This extremely simple system is built around a custom chip (the HART chip). The DC had a few options that allowed adding EPROMs, a clock/calendar circuit, and circuitry for extra floppies. The "basic" model (the one without options) is therefore limited to the HART chip, an inverter package, a crystal, and a few passive components, and is connected to the 68000 address and data bus through the cartridge port. Here is a picture of the basic board.

Note that the imaging process is not fully automated and requires the user to provide information about the protection mechanisms used on the diskette to back up. Happy Computers was supposed to provide on request (and did provide) command files for making backups of specific diskettes. However the company is long gone and there is a need to help users with the analysis process in order to create new command files (see programs below).

DC Control files

As already mentioned, the backup is performed under the control of BC files (these files must be renamed to dbkupcf.s). The original BC file only had controls for a few games and programs. I have therefore collected several BC files for many games/programs in this archive. For unknown diskettes you can use this program to help you create new BC files.

TODO table with games/program and DC directive

DC Programs

The dmfmbkup program requires the DC hardware to run. Here is the original floppy v2.7 from Happy Computers (the latest release) as well as a few other non-official releases of the dmfmbkup program. There is also this program written by Larry Layton that helps in writing BC files.

The mfmbtemp file produced during the backup process is a binary file, so it is not possible to look at the information inside it directly. I have therefore decided to write a program that reads the binary file and produces a "dump" of the content. It is experimental and currently has the following features:
  • It interprets all the codes ($0000 to $000F) and associated values of the mfmbtemp file, and displays the contents of the tracks in hex and character mode.
  • But the most important feature of the program is that it fully decodes the MFM flux (stored in $000A records) received from the DC. For that purpose the program internally mimics an FDC with a PLL data separator, an AM decoder, a DSR, a CRC generator/checker, ... This allows reading the MFM flux and displaying the decoded content of the GAPs, ID, and DATA fields, with information on good/bad CRC, sector number, size, etc...
  • It also has the capability to display semi-graphically the flux from the DC cartridge along with the decoded data.
  • More info to come...
  • The program runs on a PC, and it also runs on an Atari (but painfully slowly). Any feedback is welcome.
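For reference, the CRC checked by the FDC (and by the dump program described above) is the standard CRC-16/CCITT, computed over the $A1 sync bytes, the address mark, and the field content. Here is a minimal Python sketch of the computation (the helper name is my own; the example ID field values are illustrative):

```python
def crc16_ccitt(data, crc=0xFFFF):
    """CRC-16/CCITT (polynomial $1021, initial value $FFFF)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# An ID field: 3 x $A1 sync bytes, the $FE ID address mark, then
# track 0, side 0, sector 1, size code 2 (512 bytes per sector)
id_field = bytes([0xA1, 0xA1, 0xA1, 0xFE, 0x00, 0x00, 0x01, 0x02])
crc = crc16_ccitt(id_field)
# The two CRC bytes stored on disk are crc >> 8 then crc & 0xFF;
# re-running the CRC over the field plus those bytes must give 0.
assert crc16_ccitt(id_field + bytes([crc >> 8, crc & 0xFF])) == 0
print(f"ID field CRC = ${crc:04X}")
```

The final self-check (running the CRC over the field plus its stored CRC and getting 0) is exactly how a decoder can flag good/bad CRC for each ID and DATA field it finds in the flux.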

Copyright and Fair Use Notice
This web site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our effort to help in the understanding of the Atari computers. We believe this constitutes a 'fair use' of any such copyrighted material. The material on this site is accessible without profit and is presented here with the sole goal of disseminating knowledge about Atari computers. Consistent with this notice, you are welcome to make 'fair use' of anything you find on this web site. However, all persons reproducing, redistributing, or making commercial use of this information are expected to adhere to the terms and conditions asserted by the copyright holder. Transmission or reproduction of protected items beyond that allowed by fair use as defined in the copyright laws requires the permission of the copyright owners.

This page is maintained by DrCoolZic (Jean Louis-Guerin).
If you have any comments please send me an e-mail (remove _REMOVE_).