Each frame (still image) is converted into a grid of pixels, so we have a colour and intensity (brightness) for each point on the image. However, with high-definition TV (lots of pixels) and up to 60 frames per second, that's a lot of data (too much to send over your home Wi-Fi). So we have to reduce the amount of data sent using something called compression. If you think about a typical frame of video, lots of pixels will be the same (e.g. the sky), so we can find clever ways of sending those areas of the frame with less data. Similarly, there is a lot of similarity between consecutive frames, so we have ways of sending only the differences. Using compression we can reduce video streams from several gigabits per second to more like a megabit per second without noticeably reducing the quality.
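The two ideas above can be sketched in a few lines of code: collapsing runs of identical pixels within a frame, and sending only the pixels that changed between consecutive frames. This is a deliberately toy illustration; real codecs such as H.264 use far more sophisticated versions of both techniques, and the frame sizes and pixel values here are made up.

```python
# Spatial idea: run-length encoding — collapse runs of identical
# pixels (like a patch of sky) into (value, count) pairs.
def run_length_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(value, count) for value, count in runs]

# Temporal idea: frame differencing — instead of sending the whole
# next frame, send only (index, new_value) pairs for changed pixels.
def diff_frames(prev, curr):
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

def apply_diff(prev, diff):
    frame = list(prev)
    for i, value in diff:
        frame[i] = value
    return frame

# A 16-pixel row that is mostly "sky" (value 200):
row = [200] * 12 + [30, 30, 200, 200]
print(run_length_encode(row))   # [(200, 12), (30, 2), (200, 2)] — 3 runs, not 16 values

# Two consecutive frames where only 3 of 16 pixels change:
frame1 = [10] * 16
frame2 = list(frame1)
frame2[4:7] = [99, 99, 99]
diff = diff_frames(frame1, frame2)
print(len(diff))                            # 3 entries instead of 16 pixels
print(apply_diff(frame1, diff) == frame2)   # True — the receiver rebuilds the frame exactly
```

Both sketches are lossless: the receiver can reconstruct the original pixels exactly. Real video codecs also use lossy steps that throw away detail the eye is unlikely to notice, which is where most of the gigabits-to-megabits reduction comes from.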