All material on this site is ©2009–2017 MacAvon Media and may not be reproduced without permission.
Answers to Exercises, Chapter 2
These are answers to the exercises in the 3rd edition of Digital Multimedia (published February 2009) only. Do not try to use them in conjunction with the 2nd edition.
Buy the complete book in PDF for £19.75, or individual chapters for £1.99 each, from the authors' own Web site, MacAvon Media. The MacAvon Media PDF is DRM-free and can be read with any program that opens PDF.
- There is scope for a discussion topic here on whether any physical phenomenon changes in a truly continuous way, but here we can take the pragmatic approach that if we can measure values describing a phenomenon to an arbitrary degree of precision, the phenomenon is changing continuously, despite what may be happening at the sub-atomic level. Notice that this doesn't mean that everything in nature varies continuously. Some things, such as the number of pebbles on a beach, are naturally "quantized" – they can only vary by whole numbers.
In the sense we have just given, though, continuously changing phenomena abound in nature. Some values change over time, others over space, and many change with both. A few examples are discussed below, but you should be able to think of many more.
- The colour and intensity of natural light at any point on the surface of the world varies continuously, leading to the variation in colour and brightness which we see and which can be recorded photographically. See Chapters 4 and 5 for a description of how light values are represented digitally. If resolution and colour depth are not high enough, pixellation and posterization may result, as described in the text.
- The contours of a landscape change continuously in space. (And – very slowly – over time.) We can represent the shape of a hillside, for example, as a set of values recording height above sea level at discrete points. If these values are digitized, small distinctions between heights may be lost. A practical consequence of this is that you cannot be sure which direction water will flow between two points.
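The point about losing the direction of water flow can be illustrated with a toy quantization. In the sketch below the heights and the 0.1 m quantization step are chosen purely for illustration:

```python
# Heights of two adjacent points on a hillside, in metres.
# Water flows from the higher point a towards b.
a, b = 152.237, 152.214
assert a > b

# Digitize the heights with a quantization step of 0.1 m.
step = 0.1
qa = round(a / step) * step
qb = round(b / step) * step

# Both points quantize to the same level, so the digitized model
# can no longer tell which way the water flows between them.
assert qa == qb
```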
- Water is itself a rich source of continuous change. All flowing water – rivers, streams, waterfalls and so on – exhibits continuous movement and complex fluid dynamics. For the purposes of multimedia we can record the changing appearance and the changing sound of moving water, either independently or together. The visual record can currently only be made as a sequence of still bitmapped images, perhaps as digital video. This captures and reproduces only a small part of the information that was present in the original, so the visual appearance of the continuous phenomenon cannot be accurately recorded; in fact a great deal of information is missing. The sound of moving water can also be recorded. An analogue recording of the sound will be continuously varying (to all intents and purposes), though it will not necessarily be an accurate record of the real sound. If the sound is recorded digitally or digitized, then at best a large number of samples can be stored, which necessarily means that some information from the original phenomenon is missing, even if the human ear cannot detect the loss. If a small number of samples is used, or the sound is heavily compressed, the quality will deteriorate and the recording will no longer sound like the original.
Different considerations apply if you try to describe flowing water scientifically rather than record its appearance. For instance, the velocity in a fast-flowing stream varies continuously with position and over time. If you measure the velocity at a point, a digital representation will be limited in its precision, which will affect the precision with which you can test any hypothesis about the nature of the flow.
- Change in barometric pressure is a continuous phenomenon. Traditionally this was recorded by having a stylus trace a continuously varying line on a piece of paper (or similar). This analogue method of recording captured the continuous change, even if it was not absolutely accurate. Storing a sequence of values for pressure cannot record the continuous nature of the change in the same way. (The same is true for seismic activity.) However, there is little advantage in the analogue representation, because the main purpose of measuring barometric pressure is to assist in weather forecasting. The complex calculations involved in modelling the atmosphere are not amenable to analytic solutions, so digitization is essential.
- Yes, because human hearing only extends to roughly 20 kHz and the Sampling Theorem tells us that all frequencies below 22.05 kHz can be reconstructed accurately from digitized data sampled at 44.1 kHz. (We won't go into the question of whether this means that the subjective quality of digital audio is necessarily more realistic than that of analogue recordings.)
- In all cases the wheels will appear to rotate backwards; it is only the speed of rotation that will change at the different projection speeds. It is the sampling rate – in this case, the rate at which the frames were photographed – which determines whether aliasing will occur.
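The wagon-wheel effect is temporal aliasing, and the same arithmetic applies to audio sampling. As a small sketch (the frequencies are chosen purely for illustration), a 30 kHz tone sampled at 44.1 kHz produces exactly the same samples as a phase-inverted 14.1 kHz tone, because any frequency above half the sampling rate folds back below the Nyquist limit:

```python
import math

fs = 44_100            # sampling rate in Hz
f_real = 30_000        # actual frequency, above the Nyquist limit fs / 2
f_alias = fs - f_real  # 14,100 Hz: the frequency the samples appear to have

for n in range(100):
    t = n / fs
    sample_real = math.sin(2 * math.pi * f_real * t)
    sample_alias = -math.sin(2 * math.pi * f_alias * t)
    # The two tones are indistinguishable from their samples alone.
    assert math.isclose(sample_real, sample_alias, abs_tol=1e-9)
```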
- The result is a general one: the number of distinct values that you can represent in n bits is 2^n (you can prove this formally by induction), so if you double the number of bits, the number of distinct values that you can represent is squared (2^2n = (2^n)^2). Quantization levels are just one example.
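The relationship is easy to verify numerically; a minimal Python check:

```python
# Number of distinct values representable in n bits is 2^n.
for n in (1, 2, 4, 8, 16):
    print(n, "bits ->", 2 ** n, "values")

# Doubling the number of bits squares the number of values: 2^(2n) == (2^n)^2
for n in (4, 8, 12):
    assert 2 ** (2 * n) == (2 ** n) ** 2
```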
- Please note the erratum to this question and only consider lossless compression for this exercise. The proof is usually expressed roughly as follows. As we saw in the previous question, n bits can represent 2^n distinct values. In other words, there are 2^N different files of length N bits (N/8 bytes). An algorithm that could compress all of them would have to map each of these different files to a shorter one, but the number of shorter files is 2^(N-1) + 2^(N-2) + ... + 1, which sums to 2^N - 1. That is, there are not enough shorter files to map each of the 2^N files of length N bits to a shorter one. Since we cannot allow two different files to map to the same compressed version if compression is to be lossless, it is not possible for any algorithm to compress all files.
The case of lossy compression would make a good discussion topic. For lossy compression, we need to have some idea of how much information loss is acceptable. We could, after all, produce an algorithm that compressed every single file to zero bits – provided we didn't mind losing all the information in the file. Can the counting argument just given be extended to provide a lower bound on the amount of data that must be discarded by an algorithm that always achieves compression? If so, is it possible to relate this bound to some measure of how much quality has been lost in the process?
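For small file lengths the counting argument can be checked directly. The sketch below (plain Python, assuming nothing beyond the argument in the text) counts the files:

```python
N = 12  # file length in bits; small enough to count exhaustively

n_files = 2 ** N                           # distinct files of exactly N bits
n_shorter = sum(2 ** k for k in range(N))  # files of 0 .. N-1 bits

# The geometric series 2^(N-1) + ... + 2 + 1 sums to 2^N - 1, so there
# is always one shorter file too few for a lossless (injective) mapping.
assert n_shorter == n_files - 1
print(n_files, "files, but only", n_shorter, "shorter files")
```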
- Here are six examples of significant differences between vector and bitmapped graphics. You may be able to suggest some others:
- Vector graphics can be scaled without loss of quality; bitmapped images cannot.
- In vector graphics we record the values necessary to specify position and other attributes of each shape or path. In bitmapped images we explicitly record a colour value for each pixel.
- Individual shapes and paths of a vector graphic can be selected; in a bitmapped image only areas of pixels – defined by a boundary or by some shared property – can be selected.
- Vectors can easily be transformed without loss of quality; bitmapped images cannot.
- Vectors must be rendered for display; bitmapped images can be displayed directly.
- Vectors must be rendered before certain filters and effects that can be directly applied to bitmapped images can be applied, as described in Chapter 4.
(a) For architectural drawings you might need either vector graphics or bitmapped images. Line drawings of the elevations of buildings could be economically represented by vector graphics. Drawings which convey the architect's impression of what a building will look like may need to be bitmapped images if they are artwork which has areas of continuously varying tone, or if they are scanned from an external source. Some architects' impressions may be created directly in a vector graphics program, however.
(b) Vector graphics could be used for botanical drawings of the simplified diagrammatic form, but any botanical images which included continuous variation of tone, such as delicate shading of colours on flowers and leaves, should be represented by bitmaps.
(c) Vector graphics are more suitable for pie charts. The charts consist of lines and areas of flat colour, which can be very efficiently represented by vector graphics and relatively easily generated from sets of data. Using vector graphics would also make it easier to edit the charts.
(d) Bitmaps at high resolution are most appropriate for the recording of fingerprints, as very subtle variations in tone and line need to be recorded as accurately as possible.
(e) The image format most suitable for a map of the world would depend on the nature of the map. A diagrammatic map with areas of flat colour for the different countries would be most efficiently represented in vector graphics unless fine detail was required in drawing the coastlines and borders, but a map with subtle shading showing the nature of terrain, for example, would be better represented by a bitmapped image.
(f) If by brain scan we mean an image which is obtained directly by a continuous scan of the brain, such as a magnetic resonance scan, then a bitmapped image will be required as the image will contain subtle changes of tone and colour. If, on the other hand, the image is an artificial visualization of data recorded from the brain, then this may be most efficiently represented by vector graphics.
(g) The image format most suitable for illustrations in a children's alphabet book would depend on the nature of the illustrations required and on the method used to create them: the fact that it is a children's alphabet book is not relevant. If the illustrations have the form of simple lines and areas of flat colour then vector graphics might well be suitable. Such illustrations could be created in a vector graphics program. However, if the illustrations are created either from photographs or from original artwork scanned into the computer then bitmapped graphics should be used in order to retain all the detail of variations in tone.
(h) High resolution bitmapped graphics would be required for an accurate reproduction of the Mona Lisa, or of any painting, owing to the amount of detail and very subtle variation of tone. All such reproductions are necessarily digitizations of "analogue" originals.
(i) Vector graphics is by far the most suitable format for simple cartoon characters, and is widely used for animation. It is highly inefficient to use bitmapped formats for images composed of simple lines and areas of flat colour. This is particularly important when large numbers of images need to be played back fast, as in animation.
- There are two different kinds of circumstances in which video formats should be used for animation. First, any animation to be played back on a television set or any other device using video standards will necessarily have to be in a video format, regardless of how the animation was originally created. Even simple vector graphics cartoons will have to be saved in a video format to be broadcast or played back on TV. Second, any animation which is created using photographic techniques, with either a still camera or a video camera, or which is rendered as bitmapped image frames (e.g. from a 3D animation program), will be best represented by a video format. All digital photographic techniques record bitmapped images. Video formats provide the most effective compression and efficient playback of sequences of bitmapped images at reasonable quality. At the present time, video formats still provide the highest quality images for moving pictures.
- Assume that we need only consider the actual picture data and can ignore data structure overheads and metadata. To begin with, we will also neglect the soundtrack and assume that no chrominance sub-sampling occurs. We will also assume that the frames are in standard definition CCIR 601 format, with non-square pixels. (HD is left as a supplementary exercise.)
For PAL, each frame is 720 × 576 pixels; each pixel occupies 3 bytes and there are 25 frames per second, so each second occupies 31,104,000 bytes. For NTSC, the frames are 720 × 480 and there are 30 frames per second, which gives exactly the same figure for each second. The full feature is 100 × 60 = 6000 times this, which comes to 173.8 GB. (1 GB = 1024 × 1024 × 1024 bytes.)
In practice almost all "uncompressed" video uses chrominance sub-sampling, which will reduce the figure by one third, to 115.9 GB.
Assume the soundtrack is stereo, sampled at 44.1 kHz and 16 bits. Then each second of sound occupies 176,400 bytes, and 100 minutes adds an additional 0.98 GB to the total – which is hardly significant compared to the picture data.
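The arithmetic above can be written out as a short Python sketch, using the same assumptions (CCIR 601 frames, 3 bytes per pixel, a 100-minute feature):

```python
GB = 1024 ** 3
seconds = 100 * 60
bytes_per_pixel = 3            # no chrominance sub-sampling

pal_per_second = 720 * 576 * bytes_per_pixel * 25
ntsc_per_second = 720 * 480 * bytes_per_pixel * 30
assert pal_per_second == ntsc_per_second == 31_104_000

picture = pal_per_second * seconds
print(f"picture: {picture / GB:.1f} GB")              # 173.8 GB

# 4:2:2 chrominance sub-sampling stores 2 bytes per pixel instead of 3.
print(f"sub-sampled: {picture * 2 / 3 / GB:.1f} GB")  # 115.9 GB

sound_per_second = 44_100 * 2 * 2                     # stereo, 16-bit samples
print(f"sound: {sound_per_second * seconds / GB:.2f} GB")  # just under 1 GB
```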
- Audio is represented digitally on CD – that is, it has been sampled and quantized and is stored as bit patterns. Only earlier formats such as vinyl records represent music in an analogue form. The distinction in the news report is therefore a false one.
- Perhaps this should be a practical task. The distinction you will find is that the news stories are designed for reading, and will use a much more uniform layout with a limited number of text fonts and a conventional grid-based layout. Bold face and large type are typically used as headlines to structure the page. Advertisements use fonts and layout in many different ways, primarily to attract attention. They are designed to convey a simple message quickly, not to be read. The study of this topic is a fundamental part of Graphic Design.
- In general, it would be undesirable to restrict interactive elements to a single platform or architecture. Using a compiler does this: the generated code only runs on one type of machine. Consider a Web page: if you had interactive elements that were implemented using compiled code, it would be necessary to have different compiled versions for each type of machine. The Web server would have to be able to determine reliably which machine you were using and choose the appropriate code to send. This is not feasible. Alternatively, the compiler could be built in to the Web browser, but this would add considerable complexity and would cause a delay in rendering the page while the code was being compiled. It is hard to see how conventional compilation could be performed in a browser without creating serious security problems.
- Subject, but note that Dublin Core is intended to be used with a controlled vocabulary, not ad hoc tags. Description is intended for an account of the content using full sentences. Some tags, especially place names, might best be stored in the Coverage element.
Discussion Topics: Hints and Tips
- Think about a vinyl record. Does not the accuracy with which the groove is cut and the accuracy with which a stylus can follow it "limit" the representation? What about a traditional printed photograph? Is the accuracy of the image not "limited" by the printing process and inks used, the way the ink spreads on the paper and so on? The Sampling Theorem quantifies the limits of digital representation, but for analogue representations we can only measure frequency response, dot gain, etc.
- You can consider the use of software that allows conventional (Western) keyboards to be used, as well as other methods of input, such as handwriting recognition for Asian language characters.
- PDF reproduces appearance, unlike HTML which describes structure and is able to accommodate different displays and user agents. What differences follow from that? You should investigate "tagged PDF" before drawing too many conclusions.
- The IETF are primarily concerned with networks, so they know a lot about how routers and gateways can corrupt data that passes through them. They will also be aware that the real Internet includes many out-of-date devices and others that don't actually implement the standards correctly. Standards documents should be readable by everyone at any time, so shouldn't be tied to proprietary software formats. Do these considerations still suggest the use of plain ASCII?
Practical Tasks: Hints and Tips
- Hex editors typically allow you to change the contents of files. Doing so will often render them unusable, so if you want to test your understanding of how a particular file is constructed by altering it, be sure to make a copy and work on that. Never edit your only copy of a file in this way.
- If you don't use Bridge, you can install the (free) ImageMagick package and use its identify command with the -verbose option to see a listing of the metadata, though this may not include all the available information. Numerous utilities are available for Mac and Windows that allow you to view and edit metadata, especially Exif.
- Note that most digital cameras create JPEG images by default. If your camera is capable of it, save a photograph in an uncompressed format, such as TIFF, for this exercise.