Data Representation: Sound and Video
Why This Matters
This lesson explores how sound and video are represented digitally, focusing on the processes of sampling, quantisation, and encoding. We will examine the factors affecting the quality and file size of digital audio and video, and understand the role of codecs in compression.
Digital Sound Representation: Sampling and Quantisation
Digital sound is created by converting continuous analogue sound waves into discrete digital data. This process involves two main stages: sampling and quantisation.
- Sampling: An Analogue-to-Digital Converter (ADC) measures the amplitude of the analogue sound wave at regular intervals. These measurements are called samples. The sampling rate determines how many samples are taken per second. A higher sampling rate (e.g., 44.1 kHz for CD quality) captures more detail of the original waveform, giving higher-fidelity sound but also a larger file size. According to the Nyquist-Shannon sampling theorem, the sampling rate must be at least twice the highest frequency present in the original analogue signal to reconstruct it accurately.
- Quantisation: Each sampled amplitude value is then assigned a discrete numerical value. The bit depth (or quantisation resolution) determines the number of bits used to represent each sample. For example, an 8-bit depth allows 2^8 = 256 different amplitude levels, while 16-bit allows 2^16 = 65,536 levels. A higher bit depth represents the original amplitude more accurately, reducing quantisation error and improving sound quality, but again increasing file size. The digital data can then be stored or transmitted.
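The two stages above can be sketched in code. This is an illustrative toy, not a real ADC: the 8 kHz rate, 8-bit depth, and 440 Hz test tone are example values chosen here, not taken from the lesson.

```python
import math

# Example parameters (illustrative only).
SAMPLE_RATE = 8000   # samples per second (Hz)
BIT_DEPTH = 8        # bits per sample -> 2**8 = 256 amplitude levels
FREQ = 440           # frequency of the test tone, in Hz

def sample_and_quantise(duration_s):
    """Sample a sine tone at SAMPLE_RATE and quantise each sample to BIT_DEPTH bits."""
    levels = 2 ** BIT_DEPTH
    samples = []
    n_samples = int(SAMPLE_RATE * duration_s)
    for n in range(n_samples):
        t = n / SAMPLE_RATE                            # sampling: time of the nth measurement
        amplitude = math.sin(2 * math.pi * FREQ * t)   # "analogue" value in [-1, 1]
        # Quantisation: map [-1, 1] onto the nearest of 256 discrete levels.
        level = round((amplitude + 1) / 2 * (levels - 1))
        samples.append(level)
    return samples

samples = sample_and_quantise(0.01)   # 10 ms of audio
print(len(samples))                   # 80 samples (8000 Hz x 0.01 s)
print(min(samples), max(samples))    # every value fits in 8 bits (0..255)
```

Raising `SAMPLE_RATE` or `BIT_DEPTH` captures the wave more faithfully, but every extra sample and every extra bit per sample adds to the file size.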
Factors Affecting Digital Audio Quality and File Size
Several factors directly impact the quality and file size of digital audio. Understanding these is crucial for making informed decisions about audio encoding.
- Sampling Rate: As discussed, a higher sampling rate captures more points on the sound wave per second, resulting in a more faithful reproduction of the original sound and higher quality. However, doubling the sampling rate roughly doubles the raw file size.
- Bit Depth (Quantisation Resolution): A greater bit depth allows for a wider range of amplitude values to be represented, reducing quantisation error and leading to a more dynamic and accurate sound. Increasing bit depth also directly increases file size.
- Number of Channels: Audio can be recorded in mono (one channel) or stereo (two channels). Stereo sound provides a richer, more immersive experience but effectively doubles the file size compared to mono for the same sampling rate and bit depth. Multi-channel audio (e.g., 5.1 surround sound) further increases file size.
The uncompressed file size for a given duration can be calculated as: File Size (bits) = Sampling Rate (Hz) × Bit Depth (bits) × Number of Channels × Duration (seconds). This calculation highlights the direct relationship between quality parameters and storage requirements.
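The formula above can be checked with a short calculation. The function name is ours; the CD-quality figures (44.1 kHz, 16-bit, stereo) come from the lesson.

```python
def audio_file_size_bits(sampling_rate_hz, bit_depth, channels, duration_s):
    """File Size (bits) = Sampling Rate x Bit Depth x Channels x Duration."""
    return sampling_rate_hz * bit_depth * channels * duration_s

# One minute of CD-quality stereo audio: 44.1 kHz, 16-bit, 2 channels.
bits = audio_file_size_bits(44_100, 16, 2, 60)
megabytes = bits / 8 / 1_000_000   # 8 bits per byte, 10^6 bytes per MB
print(bits)        # 84672000 bits
print(megabytes)   # 10.584 MB of raw audio per minute
```

Halving any one parameter, such as recording in mono instead of stereo, halves the raw file size, which is exactly the trade-off the bullet points describe.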
Digital Video Representation: Frames and Pixels
Digital video is essentially a sequence of still images, called frames, displayed in rapid succession to create the illusion of motion. Each frame is a digital image composed of a grid of tiny picture elements called pixels.
- Frame Rate: This refers to the number of frames displayed per second (fps). A higher frame rate results in smoother, more fluid motion. Cinema typically uses 24 fps, while television and gaming often use 30 fps or 60 fps. Lower frame rates can make motion look choppy or jerky.
- Resolution: This describes the dimensions of the video frame in terms of pixels (e.g., 1920x1080 for Full HD). A higher resolution means more pixels per frame, providing greater detail and sharpness. However, higher resolution significantly increases the data required per frame, leading to larger file sizes.
- Colour Depth: Each pixel in a video frame is assigned a colour, represented by a certain number of bits (e.g., 8-bit, 16-bit, 24-bit). A higher colour depth allows for a wider range of colours to be displayed, resulting in more vibrant and realistic visuals. Similar to audio, higher colour depth increases the data per pixel and thus the overall file size.
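The same file-size logic applies per frame. As a rough sketch (the Full HD, 24-bit, 30 fps figures are examples from the bullets above):

```python
def video_data_rate_bits_per_s(width, height, colour_depth_bits, fps):
    """Raw video data rate = pixels per frame x bits per pixel x frames per second."""
    return width * height * colour_depth_bits * fps

# Full HD (1920x1080), 24-bit colour, 30 fps:
rate = video_data_rate_bits_per_s(1920, 1080, 24, 30)
print(rate)                    # 1492992000 bits/s
print(rate / 8 / 1_000_000)    # 186.624 -> roughly 187 MB of raw data per second
```

At nearly 187 MB per second uncompressed, even a short clip would be enormous, which is why the next section turns to compression.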
Video Compression and Codecs
Raw, uncompressed video data can be enormous, making storage and transmission impractical. For example, a single frame of Full HD video (1920x1080 pixels at 24-bit colour) occupies about 6.2 MB, so one second of 30 fps video needs well over 180 MB uncompressed. Codecs (coder-decoders) compress video for storage and transmission, and decompress it for playback.
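As a toy illustration of lossless compression (not a real video codec), run-length encoding replaces a run of identical values, such as a stretch of identical pixels in a flat region of a frame, with a single (value, count) pair:

```python
def rle_encode(pixels):
    """Run-length encode a sequence as a list of (value, run_length) pairs."""
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][0] == p:
            encoded[-1] = (p, encoded[-1][1] + 1)   # extend the current run
        else:
            encoded.append((p, 1))                  # start a new run
    return encoded

def rle_decode(encoded):
    """Reverse the encoding exactly -- no information is lost."""
    out = []
    for value, count in encoded:
        out.extend([value] * count)
    return out

# A row of pixels from a mostly-white frame compresses well:
row = [255] * 90 + [0] * 10
encoded = rle_encode(row)
print(encoded)                      # [(255, 90), (0, 10)]
print(rle_decode(encoded) == row)   # True: a lossless round trip
```

Real codecs go far beyond this, combining intra-frame compression (within one frame) and inter-frame compression (storing only what changes between frames), and lossy codecs additionally discard detail the eye is unlikely to notice.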
Exam Tips
1. Be able to clearly define and explain the terms sampling rate, bit depth, frame rate, and resolution, and discuss their impact on file size and quality for both sound and video.
2. Understand the difference between analogue and digital signals, and the role of ADCs and DACs in the conversion process for sound.
3. Practise calculating the uncompressed file size of a sound file given its sampling rate, bit depth, number of channels, and duration. Remember to convert bits to bytes or kilobytes/megabytes.
4. Explain why compression is necessary for sound and video, and differentiate between lossless and lossy compression, giving examples of scenarios where each might be preferred.
5. Describe how a codec works in general terms, specifically mentioning inter-frame and intra-frame compression techniques for video.