Photography Asked on August 31, 2021
Background:
For some years I built my company's web page (a tree farm) using a compromise between image quality and file size, generally 300-500 pixel images. This was less than optimal for desktops, but gave tolerable speed on mobile devices.
Home monitors are improving in both contrast and pixel count, and mobile devices are improving too. However, there is still a need for small images when you aren't on a fast connection.
Good web pages are moving to the srcset attribute — you provide several versions of the image at different sizes and let the browser pick which one to download.
This is potentially a big win: I can include large, high-quality images for desktops, good-sized images for tablets, medium resolution for smartphones on a high-speed connection, and low resolution for phones on 3G. (There's a lot of Alberta where 3G is the best you can do.)
Here's the problem: my photo manager doesn't track exported files. (AFAIK no photo manager does.) The task, then, is to look at an image in my web image folder and find the original in my library. With 2300 images currently in use, doing this by hand is time-consuming.
What ways are there to speed up finding and reprocessing these images?
This is a partial answer. I will add to it as I work out the details.
Exiftool (https://exiftool.org/) can be used to extract lots of interesting metadata tidbits from images.
Starting at my image directory:
exiftool -r -ImageSize -Model -DateTimeOriginal -ImageNumber .
======== ./Seedlings/Spruce_2+2_1.jpg
Image Size : 299x400
Camera Model Name : iPhone 4
Date Created : 2015:05:17
======== ./Seedlings/Spruce_2+2_2.jpg
Image Size : 267x400
Date Created : 2015:04:29 12:58:16
Note that not all metadata is present in every image. One of the apps on my phone, "Night Camera", puts very little metadata in the image. Note also that sometimes I get just a date rather than a full date/time. It turns out there are 12 different time-stamp tags; I changed the command above to use -DateTimeOriginal, as it seems to be the best supported.
If it seems that I'm a bit nuts about dates, there's method to my madness: when I import images, they get renamed yyyy-mm-dd_hh:mm:ss, so with some changing of : to - and space to _ I can recover the actual original filename.
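That renaming scheme can be reversed in one line of shell. This is a minimal sketch, assuming the date comes out of exiftool in its usual "YYYY:MM:DD HH:MM:SS" form; only the date colons and the space need to change, since the scheme keeps the colons in the time part:

```shell
# Turn an exiftool DateTimeOriginal value back into the import-renamed
# filename stem (yyyy-mm-dd_hh:mm:ss). The sample value is from above.
dto="2015:04:29 12:58:16"
name=$(printf '%s' "$dto" | sed 's/^\([0-9]*\):\([0-9]*\):\([0-9]*\) /\1-\2-\3_/')
echo "$name"
```

Run over the saved metadata listing, this gives a candidate library filename for every exported image, which can then be checked for existence.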
The -r in the example command will go down my image tree recursively. Save this to a file.
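Capturing the scan might look like the sketch below. The -csv option is a real exiftool flag that emits one row per file, which is easier to post-process than the default block format; the output filename is my assumption, and the guard simply skips the call if exiftool isn't installed:

```shell
# Save the recursive metadata scan to a CSV file for review in step 2.
out=image-metadata.csv
if command -v exiftool >/dev/null 2>&1; then
    exiftool -r -csv -ImageSize -Model -DateTimeOriginal -ImageNumber . > "$out"
fi
```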
Step 2
Check it to see that every file has a meaningful date. I may decide to pull further info.
This is also my time to rationalize my folder structure. Over the 10 years I've had my web page, 'things jest grew'.
My thinking right now is that I will export a version of each image as a 3000 x 2000 pixel JPEG. That's too large for now, but hey, Bill Gates said no one would need more than 640K of RAM in a PC.
Using ImageMagick I will then create a set of images from each large image, reducing pixel dimensions by a factor of 1.414 (the square root of 2), with a sharpening stage between steps. See http://www.controlledvocabulary.com/imagedatabases/downsampling.html for details about this.
ImageMagick is a suite of command-line tools for image manipulation. The syntax is extensive and somewhat arcane, and it's generally not a tool for use on a single image. But if you need to work on a thousand or so images, it saves a lot of time.
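The downsampling chain described above can be sketched in shell. This is only a sketch: convert, -resize, and -unsharp are real ImageMagick commands and options, but the source filename, output naming, and sharpening settings here are placeholder assumptions, and the conversion only runs if the file and tool actually exist. Note that repeatedly dividing 3000 by 1.414 conveniently bottoms out in the 300-500 pixel range currently in use:

```shell
# Starting from a 3000px-wide master, generate each smaller size by
# dividing the width by sqrt(2), sharpening lightly at each step.
src="master.jpg"
w=3000
sizes=""
while [ "$w" -ge 300 ]; do
    sizes="$sizes $w"
    if [ -f "$src" ] && command -v convert >/dev/null 2>&1; then
        convert "$src" -resize "${w}x" -unsharp 0x0.75 "img-${w}w.jpg"
    fi
    w=$(( w * 1000 / 1414 ))   # divide by sqrt(2) using integer math
done
echo "$sizes"
```

With these numbers the chain produces widths of 3000, 2121, 1500, 1060, 749, 529, and 374 pixels.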
The same script I write to drive ImageMagick will also write srcset snippets.
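Generating one such snippet might look like the sketch below. The srcset/ directory and the Shelterbelt-3 name come from the example further down; the width list follows the sqrt(2) downsampling plan above, and the markup itself is standard srcset syntax (the src fallback width chosen here is an arbitrary assumption):

```shell
# Write a srcset include file for one image.
name="Shelterbelt-3"
mkdir -p srcset
list=""
for w in 3000 2121 1500 1060 749 529 374; do
    list="$list, /Images/$name-${w}w.jpg ${w}w"
done
list=${list#, }   # drop the leading ", "
printf '<img src="/Images/%s-1060w.jpg" srcset="%s" alt="%s">\n' \
    "$name" "$list" "$name" > "srcset/$name.sst"
```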
Right now, I write my webpages in Markdown, so an image looks like this:
<div class="picr">
![Typical Pine](/Images/Shelterbelt/Shelterbelt-3.jpg)
Caption goes here
</div>
This will get replaced with
<div class="picr">
[% INCLUDE srcset/Shelterbelt-3.sst]
Caption goes here
</div>
All that is manual work, but because the srcset file tree is separate, there will be continuity as I change each image. If a user has the older cached image, he's no worse off than before.
The intent is to tackle the pages most frequently used by mobile visitors first.
Answered by Sherwood Botsford on August 31, 2021
Partial answer as well, but perhaps helpful.
I understand your current issue as: you have multiple downscaled versions of an image, but cannot find the original image anymore.
Software that can look for visual duplicates of photos might be a solution. This previous question has more information on that.
Manual
For the future, if you want to stick to the manual process of creating different-sized versions of an image, you should be able to tackle this problem:
My photo manager doesn't track exported files.
If you use something like Lightroom as your photo manager, you can export various sizes and keep the original filename in the exported image.
E.g.: you have a photo called "IMG_1234.jpeg"; you can then export 256px, 512px, and 1920px wide versions and automatically name them along the lines of "IMG_1234-256.jpeg", "IMG_1234-512.jpeg", and "IMG_1234-1920.jpeg".
Command line applications like ImageMagick can do something similar.
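A minimal sketch of that ImageMagick equivalent, assuming the convert tool is available: export several widths of an original while keeping its filename as the stem. The IMG_1234.jpeg name and width list come from the example above; the resize only runs if the file and tool actually exist:

```shell
# Export 256/512/1920px-wide versions of each original, keeping the
# original filename as the stem of each exported file.
for f in IMG_1234.jpeg; do
    stem=${f%.*}
    for w in 256 512 1920; do
        out="${stem}-${w}.jpeg"
        if [ -f "$f" ] && command -v convert >/dev/null 2>&1; then
            convert "$f" -resize "${w}x" "$out"
        fi
    done
done
```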
Automatic
You mention that you use Markdown to create your website, so I assume you are using a static site generator. In that case you should be able to find a plugin which can handle generating images of different sizes automatically (i.e. responsive images).
This article explains a complete setup. Perhaps it's too much for your goal, but you could draw inspiration from it.
Answered by Saaru Lindestøkke on August 31, 2021