I usually find exiftool to be phenomenally fast, limited mostly by I/O.
I'm using this command in terminal (on OS X) to rename the GPX files I've collected into a standard format.
exiftool -d '%Y%m%d-%H-%M-%S' '-FileName<${GpxTrkTrksegTrkptTime;tr/ /-/;tr/:/-/;tr(/Z/)()d;}%-c.gpx' *.gpx
but it seems to take a while to work through the files and rename them: roughly one file per second, sometimes one to three seconds per file. The files are not that large, only a few megabytes, maybe 10 MB for a long GPX session.
Is there something I'm missing to make it go 'faster'?
thx!
Parsing large XML files does take time. I can't think of any way around this since you need to parse the file to get GpxTrkTrksegTrkptTime.
- Phil
Is there a way to just grab the first <time> XML entry, which usually appears within the first few lines of the file, and be done with it?
i.e.
<time>2021-11-07T22:57:10.000Z</time>
and use that as the key point for determining date?
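If you only ever need that first <time> element, a plain text search can avoid full XML parsing entirely. A rough sketch, assuming the timestamp appears in a <time>…</time> element on a single line, as in typical GPX output (the sample file below is made up for illustration):

```shell
#!/bin/sh
# Create a small sample GPX file for illustration (hypothetical content).
cat > sample.gpx <<'EOF'
<?xml version="1.0"?>
<gpx><trk><trkseg>
<trkpt lat="1" lon="2"><time>2021-11-07T22:57:10.000Z</time></trkpt>
</trkseg></trk></gpx>
EOF

# Grab the first <time> element and strip the tags.
# -m1 stops grep at the first match, so the rest of the file is never scanned.
first_time=$(grep -o -m1 '<time>[^<]*</time>' sample.gpx | sed 's/<[^>]*>//g')
echo "$first_time"
```

This is fragile compared to a real XML parser (it assumes the element is not split across lines), but for well-behaved GPX exports it only reads the first few kilobytes of each file.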
There is an option to ignore namespaces in RDF/XML, but there is no such shortcut for plain XML.
I looked into adding this feature, but it would be a real pain because the XML parsing may be deeply nested, and it would be necessary to create a mechanism for aborting these deeply nested function calls. It could be done, but there would be a small speed penalty for normal use due to all the extra code I would have to add. At this point I don't think it would be worth it.
- Phil
Ok. Thank you for clarifying!
:)
One option you might try is using Xidel (https://www.videlibri.de/xidel.html) to parse the file. I fiddled around and was able to extract the first timestamp with this command:
xidel file.gpx -e "(//trkpt/time)[1]"
Example output on an old GPS file:
C:\>xidel Y:\Data\dump\Text\Geotracks\2013-01-28_152504.gpx -e "(//trkpt/time)[1]"
**** Retrieving: Y:\Data\dump\Text\Geotracks\2013-01-28_152504.gpx ****
**** Processing: Y:\Data\dump\Text\Geotracks\2013-01-28_152504.gpx ****
2013-01-28T23:25:04.000Z
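To turn that timestamp into the same YYYYMMDD-HH-MM-SS name the exiftool command produces, plain shell string handling is enough. A sketch, assuming the timestamp has already been captured into a variable (e.g. via `ts=$(xidel file.gpx -e "(//trkpt/time)[1]")`; it is hardcoded below so the example stands alone):

```shell
#!/bin/sh
# Timestamp as produced by xidel (hardcoded here for illustration).
ts='2013-01-28T23:25:04.000Z'

# Split on the 'T', drop the fractional seconds and trailing Z, then
# reshape 2013-01-28 / 23:25:04 into 20130128-23-25-04.gpx
date_part=${ts%%T*}          # 2013-01-28
time_part=${ts#*T}           # 23:25:04.000Z
time_part=${time_part%%.*}   # 23:25:04
name="$(echo "$date_part" | tr -d '-')-$(echo "$time_part" | tr ':' '-').gpx"
echo "$name"

# A rename would then be e.g.: mv "file.gpx" "$name"
```

Wrapped in a `for f in *.gpx` loop, this would reproduce the exiftool renaming (minus its `%-c` duplicate-name handling, which you would need to add yourself).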
On the next batch of gpx renaming I'll give that a shot, thank you StarGeek. :-)