-overwrite_original vs. -overwrite_original_in_place (speed and safety)

Started by philbond87, January 13, 2022, 01:16:52 PM


philbond87

In one of my applications, my testing shows that using -overwrite_original is approximately 2.4x faster than -overwrite_original_in_place.
I suppose that stands to reason, if I'm understanding the difference.
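For reference, a rough way to compare the two (a sketch only; the folder name and tag below are placeholders, and the numbers will depend on file sizes and disk speed) is to time the same write with each option on fresh copies of the files:

    time exiftool -overwrite_original -Artist="Test" testfolder/
    time exiftool -overwrite_original_in_place -Artist="Test" testfolder/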

It seems that one makes a copy, changes the original, then deletes the copy, while the other just performs the change directly on the original.
Is one any less safe to use than the other... perhaps in the case where there is a glitch in the operation before the copy is deleted? They seem pretty much the same to me in terms of safety.

Thanks,
Phil

StarGeek

The -overwrite_original option creates a copy of the original with the specified changes.  If successful, it deletes the original and renames the copy. Or something along those lines.

The -overwrite_original_in_place option creates a copy with the specified changes, then opens the original file and copies the info from the edited file into the original, then removes the edited copy.
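One way to see this difference for yourself (an untested sketch, and the file name and tag are just examples) is to watch the file's inode number on Mac/Linux:

    ls -i photo.jpg        # note the inode number
    exiftool -overwrite_original -Artist="Test" photo.jpg
    ls -i photo.jpg        # the inode usually changes, because the edited copy replaced the file
    exiftool -overwrite_original_in_place -Artist="Test" photo.jpg
    ls -i photo.jpg        # the inode should stay the same, because the original file itself was rewritten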

Technically, I'd say that -overwrite_original is slightly safer, because if, for example, there was a power failure during -overwrite_original_in_place, you might end up with a corrupted file.

The only real reason to use -overwrite_original_in_place would be in cases where properties of the original file would be lost with a new copy.  As the docs say, things like Mac Finder tags would be lost without it.  Though I haven't tested it, I would also suspect that the same thing would happen with Windows Alternate Data Streams (ADS).  But those are rarely used, so it's probably not much of a problem.
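If you want to check the Finder tags behavior yourself on a Mac, something along these lines should show whether the extended attributes (where Finder tags are stored) survive a write. Again untested, and the file name and comment are just examples:

    xattr tagged.jpg                                                  # list extended attributes before writing
    exiftool -overwrite_original_in_place -Comment="test" tagged.jpg
    xattr tagged.jpg                                                  # attributes should still be listed
    exiftool -overwrite_original -Comment="test" tagged.jpg
    xattr tagged.jpg                                                  # the Finder-tag attribute may be gone now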
"It didn't work" isn't helpful. What was the exact command used and the output.
Read FAQ #3 and use that cmd
Please use the Code button for exiftool output

Please include your OS/Exiftool version/filetype

philbond87

Thanks for that info. There are possible situations where the loss of Mac Finder tags might be an issue.