1. Never, never publish a screen shot in JPG (or JPEG, if you prefer) format. Always use a lossless format such as BMP or GIF. JPEG compression is lossy and will always "soften" the image, especially fine detail like text, so it will not look as "real" as it could. One would think a GIF file would be perfect, since its 256-color limit is no obstacle if the color palette used is the standard Windows palette.
2. Save continuous-tone images (photographs) in JPEG format. Note: Start with the highest-resolution originals you can get, preferably in a raw or bitmapped format. Save the originals in a safe place and always work on (tamper with) copies. I know you know this, because I heard you say so on Zen-Tech.
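The lossless-versus-lossy distinction behind tips 1 and 2 can be sketched in a few lines of Python. This is only a toy illustration: zlib stands in for lossless storage (BMP/GIF-style, where every pixel survives), and coarse quantization stands in for JPEG-style lossy compression (where detail is discarded for good).

```python
import zlib

# A toy "image": one row of 8-bit grayscale pixel values.
pixels = bytes(range(0, 250, 10))  # 25 distinct shades

# Lossless round trip (zlib stands in for BMP/GIF/PNG-style storage):
restored = zlib.decompress(zlib.compress(pixels))
# restored == pixels: every pixel value survives exactly.

# Lossy round trip (coarse quantization stands in for JPEG):
quantized = bytes((p // 16) * 16 for p in pixels)
# quantized != pixels: nearby shades collapse together, and the
# discarded detail cannot be recovered from the saved file.
```

This is why a screen shot full of crisp text should go through the lossless path, while a photograph, whose smooth tones hide the quantization, can afford the lossy one.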
3. On a Windows machine, you can always obtain a screen shot by pressing the Print Screen key. This copies an image of the screen to the clipboard at the resolution set by the graphics card. See the attachment, for example; it is an image of this screen at a resolution of 1280 x 960 pixels.
Note: The image in the attachment, when opened and viewed on my screen, has a bit less brightness and a slight greenish-gray hue compared with the original screen image. Why, you might ask? Good question. I'm not sure, but I *think* it has something to do with the path from the image to the screen. In the original case, the screen is driven by the graphics card directly from the OS; in the reproduced case, it is a view of an image generated from a file. It would seem that the brightness, hue and contrast values in the file would be the same as those generated by the OS, and since they are processed by the same graphics card, the two views should look alike…but then this is a Windows environment. Who knows for sure how the system and the graphics card drivers handle the same data from different sources? So tell me, what does the image look like on your end: a white background, or an odd hue? Also, keep in mind that the screen capture is taken from the output of the graphics card and then stored when saved. When viewed again, it must pass through the graphics card again, do you agree? If so, the card could muck it up. I haven't calibrated anything, but I'll bet an expert could tweak the graphics drivers so that this problem would go away; the NVIDIA driver does offer color-management options. Heck, I don't know.
4. Invest in a program like Adobe Illustrator to create line art; then save and use it in a vector-based format like PDF or EPS. These can be resized in the target application without loss of quality. You can export them as a bit-mapped file such as GIF or BMP, but why do that and lose valuable resolution?
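The reason vector art resizes cleanly while a bitmap export doesn't can be shown with a deliberately simplified sketch (the numbers are made up for illustration): a vector shape is stored as coordinates, so scaling just multiplies them, while a rasterized shape can only have its existing pixels duplicated or interpolated.

```python
# Vector: a line segment is stored as its endpoints, so scaling is exact
# at any output size; the description never loses precision.
line = ((0, 0), (10, 5))
scale = 3
scaled_line = tuple((x * scale, y * scale) for x, y in line)

# Raster: once the same edge is frozen into pixels, enlarging can only
# repeat (or interpolate) the existing samples; no new detail appears,
# which is why upscaled bitmaps look blocky or soft.
row = [0, 1, 1, 0]  # one scan line of a rasterized edge
upscaled_row = [p for p in row for _ in range(scale)]
```

The vector result is a perfectly sharp line at triple size; the raster result is the same four samples stretched over twelve pixels.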
5. Programs may be better behaved if you place images in them by reference rather than embedding the image in the file. Caution: avoid OLE, especially if an XP OS has been upgraded to Service Pack 3; OLE is a failed concept. Note also that it's been a long time since I used PowerPoint 2003 and I have no newer version, so I don't know what image-placement options are available in the various versions of the application.
6. You lose a generation of quality if you open a file in an editing application like Photoshop and then re-save it as a compressed file over itself, even if you don't edit it. This applies to lossy formats such as JPEG and MPEG of all types. However, viewing an image file or playing a music file in something like iTunes does no damage to the original file, as the data is only decompressed for viewing or for your listening pleasure.
7. In Photoshop, for example, you can increase the PPI of an image (a JPG, say) without decreasing the number of pixels. The resulting picture will print smaller (and appear smaller in Photoshop's print-size preview), but on screen at 100% zoom it is unchanged, and the file will hold the same number of megabytes. The image quality in terms of inherent resolution will be the same (no loss of pixel data), but of course as soon as it is re-saved (i.e., re-compressed) there will be a JPEG generation loss, the magnitude of which depends on the degree of re-compression. Note that one could argue that the resolution is higher if you do this, since the PPI is larger, but it's not true. The resolving power is the same; the image is just smaller when printed. Mind you, while all of this image adjustment in Photoshop was taking place, the DPI of the screen (its native resolution*) could not be changed, but the graphics card resolution (the number of generated pixels in the H direction by the number of generated pixels in the V direction) is set independently at the discretion of the user, via the Properties > Settings interface found by right-clicking on the desktop. These different interactions, which affect both the resolution of an image and its size as seen on the screen or as printed, are the reason it is important to talk about the different aspects of *resolution* using the proper units, or none at all. It's the only way to communicate accurately when giving advice or trying to understand what's going on when you are creating, processing, viewing or printing an image.
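The PPI arithmetic in tip 7 is worth making concrete. A minimal sketch, using hypothetical numbers (a 1600 x 1200 pixel image whose PPI tag is changed from 100 to 200 without resampling): the implied print size halves, but the pixel count, and therefore the resolving power, is untouched.

```python
# Hypothetical example: a 1600 x 1200 pixel image re-tagged from
# 100 PPI to 200 PPI with no resampling (no pixels added or removed).
width_px, height_px = 1600, 1200

def print_size_inches(width_px, height_px, ppi):
    """Physical print size, in inches, implied by a given PPI tag."""
    return width_px / ppi, height_px / ppi

at_100 = print_size_inches(width_px, height_px, 100)  # 16.0 x 12.0 inches
at_200 = print_size_inches(width_px, height_px, 200)  # 8.0 x 6.0 inches

# The pixel count is identical in both cases, so no detail was gained
# or lost; only the units printed on the ruler changed.
pixel_count = width_px * height_px  # 1,920,000 pixels either way
```

This is exactly why stating which unit you mean (pixels, PPI, inches) matters when discussing "resolution".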
*This applies to both analog (CRT) and digital (LCD) displays, whatever the interface (VGA, DVI, HDMI, etc.). However, if one is serious about it, one should obtain a high-quality LCD display (or displays) with digital inputs and use a graphics card with a matching digital interface (or interfaces, in the case of dual-head, dual-monitor setups). Furthermore, be cautious when selecting your LCD monitors: they should, if possible, always be driven by a graphics card whose resolution setting matches the native resolution of the display. However, if you choose a monitor with a native resolution of, say, 1600 x 1200 *and* it is only a 20.1" monitor (measured diagonally), setting the graphics card to 1600 x 1200 pixels results in icons and text that are too small for comfortable viewing. My monitors are like this, so I have to set both graphics card drivers to 1280 x 960 in order to get reasonably viewable results. This is a suboptimal setting that causes aliasing because of the resolution mismatch (1280 x 960 versus 1600 x 1200): there is no longer a 1-to-1 correspondence between the image pixels and the monitor's display pixels. In my case, bigger is better only when talking about the physical screen size, not the native resolution of the monitor. A better approach when buying an LCD monitor for use with contemporary graphics cards would be to choose one rated at about 80 DPI rather than the 100 DPI that I have. In other words, a 20" monitor is easier on the eyes when looking at a typical Windows screen if the graphics card setting and the monitor's native resolution match and are in the neighborhood of 1280 x 960. My monitors would be easier to use with most Windows applications if they were 24" monitors rather than 20.1" monitors. However…
…there is this important caveat! If you are just viewing or working with high-resolution photographs, higher resolutions are always desirable. In other words, my 1600 x 1200 monitors with their 100 DPI resolution (the actual viewing area is exactly 16" wide by 12" high, by the way; that's how the 100 DPI figure is derived) display full-screen, 1600 x 1200 images in near-perfect quality.
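The monitor DPI figures above can be checked with a little Pythagoras, using only the numbers stated in the text (a 20.1" diagonal, 1600 x 1200 native resolution, 16" x 12" viewing area):

```python
import math

# Figures from the text: a 20.1" (diagonal) monitor with a native
# resolution of 1600 x 1200 and a 16" x 12" viewing area.
w_px, h_px = 1600, 1200
diag_in = 20.1

# Pixels along the diagonal: sqrt(1600^2 + 1200^2) = 2000 exactly.
diag_px = math.hypot(w_px, h_px)
dpi_from_diagonal = diag_px / diag_in  # about 99.5 DPI

# Cross-check from the stated viewing area: 1600 px over 16 inches.
dpi_from_width = w_px / 16             # 100.0 DPI
```

Both routes land on the roughly 100 DPI figure quoted above, which is why the author's 1280 x 960 workaround (hypot(1280, 960) = 1600 pixels over the same 20.1" diagonal, about 80 DPI) makes text comfortably larger at the cost of a resolution mismatch.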
End of Tidbits…