JMODEM

JMODEM is a file transfer protocol developed by Richard Johnson in 1988. It is similar to the seminal XMODEM in most ways, but uses variable-size packets to make better use of the available bandwidth on high-speed modems.

JMODEM uses variable-length records called blocks. Blocks start at 512 data bytes and grow to a maximum of 8192 bytes per block. Each block carries a 6-byte overhead, so the overhead starts at a fairly high figure of roughly 1.2 percent and falls to a very low 0.07 percent as the transmission progresses. The block length increases in 512-byte increments as long as there are no errors requiring retransmission. Should an error occur, the block size is cut in half; this continues until the block size is as short as 64 bytes.
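As a rough illustration of this adaptive block sizing, here is a minimal sketch in C. The 64-, 512- and 8192-byte limits and the 512-byte growth step come from the description above; the function name next_block_size and the surrounding demonstration code are assumptions made for illustration, not the original JMODEM implementation.

    /* Sketch of JMODEM-style adaptive block sizing: grow by 512 bytes after
     * a clean block, halve after an error, and stay within 64..8192 bytes. */
    #include <stdio.h>

    #define MIN_BLOCK   64
    #define MAX_BLOCK 8192
    #define STEP       512

    /* Return the next block size, given the current size and whether the
     * previous block had to be retransmitted. */
    static int next_block_size(int size, int had_error)
    {
        if (had_error) {
            size /= 2;                    /* shrink quickly on a bad line */
            if (size < MIN_BLOCK)
                size = MIN_BLOCK;
        } else {
            size += STEP;                 /* grow while the line is clean */
            if (size > MAX_BLOCK)
                size = MAX_BLOCK;
        }
        return size;
    }

    int main(void)
    {
        /* The 6 bytes of per-block overhead shrink, proportionally, as the
         * block grows: about 1.2% at 512 bytes, about 0.07% at 8192 bytes. */
        printf("overhead at  512 bytes: %.2f%%\n", 100.0 * 6 / 512);
        printf("overhead at 8192 bytes: %.2f%%\n", 100.0 * 6 / 8192);

        int size = 512;
        size = next_block_size(size, 0);  /* clean block -> 1024 bytes  */
        size = next_block_size(size, 1);  /* error       -> back to 512 */
        printf("block size is now %d bytes\n", size);
        return 0;
    }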

JMODEM also includes a basic RLE data compression system, which replaces strings of repeated characters with a counter. When a run of identical characters is found, JMODEM sends a "sentinel byte" (hexadecimal 0xBB), followed by a two-byte count, followed by the byte to be repeated. JMODEM applies RLE on a block-by-block basis rather than to the file as a whole. Since many files were already compressed with formats such as ZIP, JMODEM only uses RLE on blocks where it actually reduces the size of the block.
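The run-length scheme can be sketched in C as follows. Only the 0xBB sentinel, the two-byte count, and the rule of sending a block uncompressed unless RLE actually shrinks it are taken from the description above; the little-endian byte order of the count, the minimum run length worth encoding, the handling of literal 0xBB bytes, and the function name rle_encode_block are assumptions made for illustration.

    /* Sketch of per-block RLE: a run becomes the sentinel byte 0xBB, a
     * two-byte run length (little-endian here, by assumption), and the byte
     * to repeat. Returns the encoded length, or the original length when
     * encoding would not shrink the block, in which case the caller should
     * send the block uncompressed. */
    #include <stddef.h>
    #include <string.h>

    #define SENTINEL 0xBB

    static size_t rle_encode_block(const unsigned char *in, size_t len,
                                   unsigned char *out, size_t out_cap)
    {
        size_t i = 0, o = 0;

        while (i < len) {
            size_t run = 1;
            while (i + run < len && in[i + run] == in[i] && run < 0xFFFF)
                run++;

            /* A literal 0xBB is also routed through the sentinel form so the
             * decoder cannot mistake it for a run marker (an assumption: the
             * real protocol's escaping rule is not described above). */
            if (run >= 4 || in[i] == SENTINEL) {
                if (o + 4 > out_cap)
                    return len;                       /* no room, give up */
                out[o++] = SENTINEL;
                out[o++] = (unsigned char)(run & 0xFF);         /* low byte  */
                out[o++] = (unsigned char)((run >> 8) & 0xFF);  /* high byte */
                out[o++] = in[i];
            } else {
                if (o + run > out_cap)
                    return len;                       /* no room, give up */
                memcpy(out + o, in + i, run);         /* copy literals    */
                o += run;
            }
            i += run;
        }
        /* Use the compressed form only if it is actually smaller. */
        return (o < len) ? o : len;
    }

Because the test is applied per block, blocks of data that are already compressed (for example, parts of a ZIP archive) pass through unchanged instead of being inflated by the sentinel escaping.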

JMODEM is explained in some detail in the John Dvorak book Dvorak's Guide to PC Telecommunications. [1]

Related Research Articles

A binary prefix is a unit prefix for multiples of units in data processing, data transmission, and digital information, notably the bit and the byte, to indicate multiplication by a power of 2.

PCX, standing for PiCture eXchange, is an image file format developed by the now-defunct ZSoft Corporation of Marietta, Georgia, United States. It was the native file format for PC Paintbrush and became one of the first widely accepted DOS imaging standards, although it has since been succeeded by more sophisticated image formats, such as BMP, JPEG, and PNG. PCX files commonly stored palette-indexed images ranging from 2 or 4 colors to 16 and 256 colors, although the format has been extended to record true-color (24-bit) images as well.

Run-length encoding (RLE) is a form of lossless data compression in which runs of data are stored as a single data value and count, rather than as the original run. This is most useful on data that contains many such runs. Consider, for example, simple graphic images such as icons, line drawings, Conway’s Game of Life, and animations. It is not useful with files that don't have many runs as it could greatly increase the file size.

Trivial File Transfer Protocol (TFTP) is a simple lockstep File Transfer Protocol which allows a client to get a file from or put a file onto a remote host. One of its primary uses is in the early stages of nodes booting from a local area network. TFTP has been used for this application because it is very simple to implement.

File Allocation Table (FAT) is a computer file system architecture. Originally developed in 1977 for use on floppy disks, it was adapted for use on hard disks and other devices. It is often supported for compatibility reasons by current operating systems for personal computers and many mobile devices and embedded systems, allowing interchange of data between disparate systems. The increase in disk drive capacity required three major variants: FAT12, FAT16 and FAT32. The FAT standard has also been expanded in other ways while generally preserving backward compatibility with existing software.

bzip2 is a free and open-source file compression program that uses the Burrows–Wheeler algorithm. It only compresses single files and is not a file archiver. It was developed by Julian Seward and is maintained by Federico Mena. Seward made the first public release of bzip2, version 0.15, in July 1996. The compressor's stability and popularity grew over the next several years, and Seward released version 1.0 in late 2000. Following a nine-year hiatus of updates since 2010, Federico Mena accepted maintainership of the bzip2 project on June 4, 2019.

In computing, tar is a computer software utility for collecting many files into one archive file, often referred to as a tarball, for distribution or backup purposes. The name is derived from "tape archive", as it was originally developed to write data to sequential I/O devices with no file system of their own. The archive data sets created by tar contain various file system parameters, such as name, timestamps, ownership, file access permissions, and directory organization. The command-line utility was first introduced in Version 7 Unix in January 1979, replacing the tp program. The file structure to store this information was standardized in POSIX.1-1988 and later POSIX.1-2001, and became a format supported by most modern file archiving systems.

Truevision TGA, often referred to as TARGA, is a raster graphics file format created by Truevision Inc. It was the native format of TARGA and VISTA boards, which were the first graphics cards for IBM-compatible PCs to support Highcolor/truecolor display. This family of graphics cards was intended for professional computer image synthesis and video editing with PCs; for this reason, the usual resolutions of TGA image files match those of the NTSC and PAL video formats.

Disk formatting is the process of preparing a data storage device such as a hard disk drive, solid-state drive, floppy disk or USB flash drive for initial use. In some cases, the formatting operation may also create one or more new file systems. The first part of the formatting process, which performs basic medium preparation, is often referred to as "low-level formatting". Partitioning is the common term for the second part of the process, making the data storage device visible to an operating system. The third part of the process, usually termed "high-level formatting", most often refers to the process of generating a new file system. In some operating systems, all or parts of these three processes can be combined or repeated at different levels, and the term "format" is understood to mean an operation in which a new disk medium is fully prepared to store files. Some formatting utilities allow distinguishing between a quick format, which does not erase all existing data, and a long option that does erase all existing data.

In computing, a block, sometimes called a physical record, is a sequence of bytes or bits, usually containing some whole number of records, having a maximum length, a block size. Data thus structured are said to be blocked. The process of putting data into blocks is called blocking, while deblocking is the process of extracting data from blocks. Blocked data is normally stored in a data buffer and read or written a whole block at a time. Blocking reduces the overhead and speeds up the handling of the data-stream. For some devices, such as magnetic tape and CKD disk devices, blocking reduces the amount of external storage required for the data. Blocking is almost universally employed when storing data to 9-track magnetic tape, NAND flash memory, and rotating media such as floppy disks, hard disks, and optical discs.

In computer programming, the term magic number has multiple meanings. It may refer to a unique value with unexplained meaning or multiple occurrences that could preferably be replaced with a named constant, or to a constant value used to identify a file format or protocol.

The Lempel–Ziv–Markov chain algorithm (LZMA) is an algorithm used to perform lossless data compression. It has been under development since either 1996 or 1998 by Igor Pavlov and was first used in the 7z format of the 7-Zip archiver. This algorithm uses a dictionary compression scheme somewhat similar to the LZ77 algorithm published by Abraham Lempel and Jacob Ziv in 1977 and features a high compression ratio and a variable compression-dictionary size, while still maintaining decompression speed similar to other commonly used compression algorithms.

dd is a command-line utility for Unix and Unix-like operating systems, the primary purpose of which is to convert and copy files.


Throughput of a network can be measured using various tools available on different platforms. This page explains the theory behind what these tools set out to measure and the issues regarding these measurements.

XMODEM is a simple file transfer protocol developed as a quick hack by Ward Christensen for use in his 1977 MODEM.ASM terminal program. It allowed users to transmit files between their computers when both sides used MODEM. Keith Petersen made a minor update to always turn on "quiet mode", and called the result XMODEM.

Lynx is a file transfer protocol for use with modems, and the name of the program that implements the protocol. Lynx is based on a sliding window protocol with two to sixteen packets per window, and 64 bytes of data per packet. It also applies run length encoding (RLE) to the data on a per-block basis to compress suitable data.

The following tables compare general and technical information for a number of file systems.

In computer disk storage, a sector is a subdivision of a track on a magnetic disk or optical disc. Each sector stores a fixed amount of user-accessible data, traditionally 512 bytes for hard disk drives (HDDs) and 2048 bytes for CD-ROMs and DVD-ROMs. Newer HDDs use 4096-byte (4 KiB) sectors, which are known as the Advanced Format (AF).

Fixed-block architecture (FBA) is an IBM term for a hard disk drive (HDD) layout in which each addressable block on the disk has the same size, utilizing 4-byte block numbers and a new set of command codes. IBM created the term for its 3310 and 3370 HDDs, introduced in 1979, to distinguish such drives as IBM transitioned away from the variable record size format used on its mainframe hard disk drives since the System/360 in 1964.

A FAT file system is a specific type of computer file system architecture and a family of industry-standard file systems utilizing it.

References

  1. Dvorak, John C. (1989). Dvorak's Guide to PC Telecommunications. Osborne McGraw-Hill. ISBN 0-07-881551-7.