I intended 1.2 to be my “year-end” release, but I ended up cleaning up a few more things in the days since, including one bug fix, so I might as well call this 1.3 now.
A few pre-built binaries are again available in case that’s helpful.
I just tagged the release of dupd 1.2, enjoy hunting those duplicates!
This time I included pre-built binaries for a few platforms. Probably most useful on OS X for those without dev tools installed.
Recently I’ve made a few performance improvements to dupd, motivated by one particular edge case file set I was working with a while back. That file set had a very large number (over 100K) of files of the same size (these were log files from a production system; the content always differed, but due to the structure of the files they tended to be the same size). This was a worst case scenario for dupd, given the way it grouped files of the same size as potential duplicates. With the latest changes (in dupd 1.2) this scenario is dramatically faster (scan time reduced from about an hour to about five minutes; see below).
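For context: like most duplicate finders, dupd first buckets files by size (files of different sizes can’t be identical) and only then compares content within each bucket. Here is a minimal Python sketch of that general approach; this is a simplified illustration only, not dupd’s actual C implementation. It shows why a same-size bucket of 100K files is the worst case: the entire expensive content-comparison stage lands on one huge group.

# Simplified illustration of size-bucketing duplicate detection.
# Not dupd's actual implementation (dupd is written in C and does
# smarter partial-content comparison within each size bucket).
import os
import hashlib
from collections import defaultdict

def find_duplicates(root):
    by_size = defaultdict(list)            # size -> list of paths
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                by_size[os.path.getsize(path)].append(path)
            except OSError:
                pass                        # skip unreadable files

    dups = []
    for size, paths in by_size.items():
        if len(paths) < 2:
            continue                        # unique size: cannot be a duplicate
        # Worst case: hash every file in the bucket. A bucket of 100K
        # same-size files makes this stage very expensive.
        by_hash = defaultdict(list)
        for path in paths:
            h = hashlib.sha1()
            with open(path, 'rb') as f:
                for chunk in iter(lambda: f.read(65536), b''):
                    h.update(chunk)
            by_hash[h.digest()].append(path)
        dups.extend(group for group in by_hash.values() if len(group) > 1)
    return dups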
In more common scenarios these improvements don’t make a big difference, but there is still some small benefit. Memory consumption is also reduced in dupd 1.2 (there is room to reduce it further, which I might play with if I have time some day).
In a nutshell, dupd 1.2 should be no slower, slightly faster, or in some edge cases dramatically faster than dupd 1.1.
The three main changes were:
That said, do these changes translate to any benefit on more “normal” file sets? Nowhere near as dramatically, but it’s still faster and uses less memory, so that’s all good.
All the numbers above are from machines with SSDs. I also tested on a couple machines with traditional hard drives and there was zero change in performance. No graph, it’s just a straight line ;-)
With normal hard drives, file I/O time so completely dominates the run time that the dupd improvements make no measurable difference.
(I suspect the edge case file set would have seen improvement even on spinning rust, but I didn’t have the chance to test that scenario.)
A new release of heliod is available, version 0.3.
Pre-built binaries for a few platforms are available: https://github.com/jvirkki/heliod/releases/tag/v0.3
The main driver for this release is that I needed a 64-bit build for Debian, as I’ve been meaning to upgrade my server for a long while but was held back by the lack of a 64-bit Linux build in 0.2.
Randomly browsing the web tonight I came across this article on “What is the fastest way to find duplicate pictures?”. Nice to see the author concluded that:
"Dupd was the clear speed winner"
I’m glad it has worked well for others!
It’s unfortunate to downgrade from Mercurial to Git, but overall it should be for the better.
I also copied the release-0.2 binaries (built on Debian 6, Solaris 10 x86 and Solaris 10 SPARC) to the GitHub release files: https://github.com/jvirkki/heliod/releases
If for some reason you want to download the release-0.1 binaries, they are still available on sourceforge here: http://sourceforge.net/projects/heliod/files/
Linking these here for my future reference…
50-39-30 chainring, 12-30 Ultegra cassette (10 speed)
Speed (mph) @ 90 rpm. (A small script for regenerating tables like these is sketched below them.)
    |   12    13    14    15    17    19    21    24    27    30
----+------------------------------------------------------------
 50 | 29.3  27.0  25.1  23.4  20.6  18.5  16.7  14.6  13.0  11.7
 39 | 22.8  21.1  19.6  18.3  16.1  14.4  13.0  11.4  10.1   9.1
 30 | 17.6  16.2  15.0  14.0  12.4  11.1  10.0   8.8   7.8   7.0
32 chainring, 12-36 cassette (9 speed)
    |   12    14    16    18    21    24    28    32    36
----+------------------------------------------------------
 32 | 18.6  15.9  13.9  12.4  10.6   9.3   8.0   7.0   6.2
30 chainring, 10-42 cassette (11 speed)

    |   10    12    14    16    18    21    24    28    32    36    42
----+------------------------------------------------------------------
 30 | 21.8  18.2  15.6  13.6  12.1  10.4   9.1   7.8   6.8   6.1   5.2
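The formula behind these numbers is simple: speed = cadence x (chainring / cog) x wheel circumference. Here is a small Python sketch that regenerates tables like the ones above; the wheel circumference constant is an assumption (roughly 2.096 m for a common 700c road wheel) and needs adjusting per bike.

# Regenerate a speed-at-cadence gear table (speeds in mph).
# The wheel circumference is an assumption (~2.096 m for a typical
# 700c road wheel); adjust it for each bike.

def speed_mph(rpm, chainring, cog, circumference_m=2.096):
    meters_per_min = rpm * (chainring / cog) * circumference_m
    return meters_per_min * 60 / 1609.344   # meters/min -> miles/hour

def print_table(chainrings, cogs, rpm=90, circumference_m=2.096):
    print('    | ' + '  '.join('%4d' % c for c in cogs))
    print('----+' + '-' * (6 * len(cogs)))
    for ring in chainrings:
        row = '  '.join('%4.1f' % speed_mph(rpm, ring, c, circumference_m)
                        for c in cogs)
        print('%3d | %s' % (ring, row))

# Example: the 50-39-30 / 12-30 road table above.
print_table([50, 39, 30], [12, 13, 14, 15, 17, 19, 21, 24, 27, 30])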
Recently I’ve been doing some duplicate cleanup again, and while at it I added a few features to dupd and called it version 1.1. So this is as good a time as any to revisit the previous numbers.
I tested a small subset of my file server data using six duplicate detection tools: dupd, rdfind, rmlint, findup (from fslint), fdupes and fastdup.
The graph shows the time (in seconds) it took each utility to scan and identify all duplicates in my sample set. I’m happy to see dupd took less than half the time of the next fastest option (rdfind) and was just over seven times faster than fdupes (roughly 30 seconds vs. 75 and 222 seconds respectively; raw timings below).
The sample set is 18GB in size and has 392,378 files. There are a total of 117,261 duplicates.
I ran this on my small home server, which has an Intel Atom CPU S1260 @ 2.00GHz (4 cores), 8GB RAM, Intel 520 series SSD.
For each tool, first I ran it once and ignored the time, just to populate file caches. Then I ran it five times in a row. Discarding the fastest and slowest time, I averaged the remaining three runs to come up with the time shown in the graph above. For most of the tools, the scan times were very consistent from run to run.
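This procedure is easy to script. Below is a minimal sketch of the warmup-plus-trimmed-mean harness in Python; this is illustrative only (the raw numbers below came from the shell’s time builtin), and the dupd command line at the end is just one of the five commands tested.

# Minimal timing harness: one warmup run to populate file caches,
# then five timed runs; discard the fastest and slowest and average
# the remaining three. Illustrative sketch only; the raw numbers in
# this post came from the shell's time builtin.
import subprocess
import time

def bench(cmd, runs=5):
    devnull = dict(stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    subprocess.run(cmd, shell=True, check=True, **devnull)    # warmup run
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, shell=True, check=True, **devnull)
        times.append(time.perf_counter() - start)
    times.sort()
    trimmed = times[1:-1]                 # drop fastest and slowest
    return sum(trimmed) / len(trimmed)

print(bench('dupd scan --path $HOME/data -q'))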
dupd scan --path $HOME/data -q  13.31s user 15.94s system 99% cpu 29.533 total
dupd scan --path $HOME/data -q  13.17s user 16.09s system 99% cpu 29.539 total
dupd scan --path $HOME/data -q  13.17s user 16.13s system 99% cpu 29.572 total
dupd scan --path $HOME/data -q  13.28s user 16.04s system 99% cpu 29.604 total
dupd scan --path $HOME/data -q  13.59s user 15.74s system 99% cpu 29.605 total
rdfind -dryrun true $HOME/data  49.28s user 24.98s system 99% cpu 1:14.75 total
rdfind -dryrun true $HOME/data  49.08s user 25.29s system 99% cpu 1:14.87 total
rdfind -dryrun true $HOME/data  48.93s user 25.52s system 99% cpu 1:14.92 total
rdfind -dryrun true $HOME/data  48.92s user 25.53s system 99% cpu 1:14.95 total
rdfind -dryrun true $HOME/data  49.52s user 25.09s system 99% cpu 1:15.11 total
./rmlint -T duplicates $HOME/data  63.53s user 52.55s system 113% cpu 1:42.69 total
./rmlint -T duplicates $HOME/data  64.67s user 52.46s system 113% cpu 1:43.43 total
./rmlint -T duplicates $HOME/data  64.01s user 53.14s system 113% cpu 1:43.63 total
./rmlint -T duplicates $HOME/data  66.47s user 54.32s system 113% cpu 1:46.13 total
./rmlint -T duplicates $HOME/data  67.20s user 56.00s system 113% cpu 1:48.55 total
./findup $HOME/data  129.46s user 40.77s system 111% cpu 2:32.05 total
./findup $HOME/data  129.75s user 40.53s system 111% cpu 2:32.10 total
./findup $HOME/data  129.58s user 40.82s system 111% cpu 2:32.28 total
./findup $HOME/data  129.89s user 40.80s system 112% cpu 2:32.30 total
./findup $HOME/data  130.47s user 40.34s system 112% cpu 2:32.36 total
fdupes -q -r $HOME/data  43.16s user 170.29s system 96% cpu 3:41.87 total
fdupes -q -r $HOME/data  43.39s user 170.24s system 96% cpu 3:42.07 total
fdupes -q -r $HOME/data  42.88s user 170.87s system 96% cpu 3:42.13 total
fdupes -q -r $HOME/data  42.73s user 171.24s system 96% cpu 3:42.23 total
fdupes -q -r $HOME/data  43.64s user 170.83s system 96% cpu 3:42.86 total
I was unable to get any times from fastdup as it errors out with “Too many open files”.
The Internet is ablaze with talk about the OpenSSL vulnerability nicknamed Heartbleed (CVE-2014-0160). It is, arguably, one of the worst SSL vulnerabilities in recent memory, given how trivial it is to exploit. Attackers can, without leaving any trace and with zero effort, read up to 64K of data from the server (or client) address space. What’s there will vary, but it may, if you get (un)lucky, include private keys, passwords or other sensitive info.
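The underlying flaw is a classic missing bounds check: the TLS heartbeat reply echoes back as many bytes as the request claims to contain, without checking that claim against the actual payload length. Here is a deliberately simplified Python sketch of that bug class; this is not OpenSSL’s code (the real bug was in C), just an illustration of the missing check.

# Deliberately simplified sketch of the Heartbleed bug class: the
# responder echoes back 'claimed_len' bytes without checking it against
# the actual payload length, leaking whatever sits next to the payload
# in its memory buffer. This is NOT OpenSSL's code.

MEMORY = bytearray(b'PING' + b'...secret key material...passwords...')

def heartbeat_vulnerable(payload_offset, payload_len, claimed_len):
    # BUG: trusts the attacker-supplied claimed_len, not payload_len.
    return bytes(MEMORY[payload_offset:payload_offset + claimed_len])

def heartbeat_fixed(payload_offset, payload_len, claimed_len):
    if claimed_len > payload_len:
        return None                     # silently drop malformed request
    return bytes(MEMORY[payload_offset:payload_offset + claimed_len])

# A 4-byte payload claiming to be 64K long leaks adjacent memory:
print(heartbeat_vulnerable(0, 4, 65536))  # leaks the "secret" bytes too
print(heartbeat_fixed(0, 4, 65536))       # None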
Of course, it is not an SSL protocol vulnerability; it is a bug in the OpenSSL implementation. Those of you (us) running the heliod web server have had nothing to do this week, since heliod fortunately does not use OpenSSL (it uses NSS). After running around at work to address the Heartbleed vulnerability, it is a relief that I don’t have to do anything to fix my personal web servers, which wisely run heliod!
I’m not a compulsive upgrader; most of the time the latest release of something is barely any better (and too often a step backwards, when the marketing people take over and change things for the sake of change instead of letting engineering improve the product… but that’s another topic). I tend to wait until upgrading is absolutely worth it.
On the photography front, I’ve been using my old Nikon D40 for the longest time. While it was the low end DSLR from Nikon at the time, it was quite good (perhaps better than Nikon intended, since a few of the following-generation DSLRs were arguably a step backwards!)
Smaller nitpicks aside, my main wish for improvement has been high-ISO performance. It used to matter little, since I took mostly scenic/nature pictures (I could always use the tripod and a longer exposure; the mountains weren’t moving). But with a fast moving toddler in the house it has become more limiting: I need faster shutter speeds now, and the D40 just wasn’t cutting it when light was poor.
So I’ve now upgraded to a D7100. The high-ISO performance is very impressive!
The two pictures below were taken from the same place, in the same light (ceiling lights at night in the living room), with the same lens (Nikon 35mm f/1.8) and the same settings (1/125s, f/1.8).
Here’s the D40 picture (ISO 800):
There are a lot of other little niceties in the D7100 over the D40, but I could live without any of those. This high-ISO performance, though, is an awesome and badly needed upgrade!