Marc's Public Blog - Linux Hacking



2012/05/01 Handy tip to save on inodes and disk space: finddupes, fdupes, and hardlink.py
2012-05-01 01:01 in Linux
I've been rsyncing my Linux machines to my disk server for the last 10 years, and while I've tried to save space by using the trick below, clearly I hadn't applied it carefully everywhere, and it didn't consolidate files across backups from multiple servers.

For a single server, the trick to keep historical snapshots of your backups without losing a lot of space is to rsync to a directory named current and then cp -al current oldbackup_20120501. You keep rsyncing to current, and oldbackup stays made out of hardlinks until the corresponding files in current change to something different.
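
As a concrete illustration, a minimal sketch of that rotation could look like this (the host name and paths are just examples, not the exact commands from my setup):

    # sync the live system into the 'current' tree (example host and paths)
    rsync -aH --delete root@server1:/ /backup/server1/current/
    # snapshot 'current' as a tree of hardlinks; this costs inodes and
    # directory entries, but no extra data blocks
    cp -al /backup/server1/current /backup/server1/oldbackup_20120501

Since rsync normally writes a changed file to a new inode and renames it into place, the hardlinks in the dated snapshot keep pointing at the old data when current gets updated on the next run.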

While this served me well, it turns out it wasn't perfect: there were some admin errors in the past, and duplicates across the different servers being backed up. So, I looked for dupe finders so that I could re-hardlink identical files after the fact.
The first thing I quickly found was that comparing all files with the same size was going to be way too slow, so I had to limit the deduping to files that also had the same name, or the pool of files to compare would just be way too big.
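
To get a rough feel for how much the same-name restriction shrinks the pool, something like the following works (GNU find; /backup is a placeholder for the backup root):

    # number of file sizes that occur more than once (candidates when keyed on size alone)
    find /backup -type f -printf '%s\n' | sort -n | uniq -d | wc -l
    # number of basename+size pairs that occur more than once (usually a much smaller pool)
    find /backup -type f -printf '%f %s\n' | sort | uniq -d | wc -l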

  • apt-get install fdupes: has lots of options for recursive scanning, can delete, hardlink, or even symlink. I could not find how to tell it to only compare files with the same names.
  • http://code.google.com/p/hardlinkpy/ : it's in python, but it actually runs faster than fdupes for me, and it has useful options for working on huge trees: hardlink.py -c -f -x options.txt -x Makefile dir1 dir2 dir3. Its one flaw right now is that it runs out of RAM on my 4GB system when run on 27 million files. To save time when deduping system backups, it's useful to tell hardlink.py to only compare files with the same name (see the sample invocation after this list).
  • http://www.pixelbeat.org/fslint/ : I didn't try this one but it looked nice when you need a GUI.
  • http://svn.red-bean.com/bbum/trunk/hacques/dupinator.py : is a simple python script you can hack on if you just need to find dupes and act on them.
  • apt-get install hardlink (yes, another one). hardlink -v -f -p -t -x options.txt -x Makefile dir1 dir2. Mmmh, that one took so much memory on my 4GB server that within 20 minutes it was swapping hard.
  • hardlink.py is my favourite for now; over several days of runs (after all, there are many files to scan/compare), I've already deduplicated 5,646,995 files and saved about 300GB, not bad :)
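
For reference, the sample invocation mentioned in the hardlinkpy bullet, pointed at the backup trees of several servers (the directory names are just examples): as I understand the flags, -f restricts matching to files that share the same name, -c compares content only, and -x excludes files by pattern.

    # roughly the command from the hardlinkpy bullet above, run across several
    # servers' backup trees; skips files named options.txt and Makefile
    hardlink.py -c -f -x options.txt -x Makefile \
        /backup/server1 /backup/server2 /backup/server3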

