Finding largest file

  Date: Dec 26    Category: Unix / Linux / Ubuntu    Views: 523
  

Is there an easy way to find the largest file in a file tree? Other
than trawling through each folder in turn?

15 Answers Found

 
Answer #1    Answered On: Dec 26    

In Nautilus, go to View → Arrange Items → By Size.

 
Answer #2    Answered On: Dec 26    

But that only works in the current folder. What I'm after is something
that will give me the largest file in the folder *tree* - for example,
find the largest file under ~/ searching recursively through all
subfolders.

 
Answer #3    Answered On: Dec 26    

You can go to Applications → Accessories → Disk Usage Analyzer. You can
scan all or individual folders, and it will show you the size of every
file in order of size.
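
If I remember right, you can also launch the same tool straight from a
terminal and point it at a directory (baobab is, as far as I know, the
Disk Usage Analyzer's actual binary name):

# scan the home directory with the Disk Usage Analyzer GUI
baobab ~/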

 
Answer #4    Answered On: Dec 26    


I was hoping more for a terminal command that would do all the donkey
work for me :-(

Ah well, I'll just have to wait until I have the time to do it manually.
Thanks anyway, guys :-)

 
Answer #5    Answered On: Dec 26    

Well, if you don't mind a quick-and-dirty, inefficient program, here's
something that I think does what you want. You cd into the top of the
directory tree you want to look at, for example "cd /home/abc", then
run it as "find_big_files.pl > /tmp/log_file". The results will be
dumped into the log file. Of course, you have to put find_big_files.pl
somewhere on your $PATH.

It's very slow to output anything if you run it in a directory that
has many files/subdirectories. If you cd into / and run it, better
plan to leave it overnight, and I can't speak for how much memory it
might use.

It may have bugs; I wrote it about as fast as I could type, and for the
same reason it's not very efficient.

But it does seem to work.

Feel free to improve and/or share it, under any version of the GPL you prefer.

I'll bet that after I post this, someone will come up with a one-liner
solution. :-)

#!/usr/bin/perl

# find_big_files.pl
#
# Lists every entry under the current directory, largest first, by
# piping "find . -exec ls -ld" output through a hash keyed on size.
#
# abc 5/2008

use warnings;
use strict;

$|++;    # autoflush output

my ($in_pipe);
my ($current_line);
my (%file_hash);    # size in bytes => list of matching "ls -ld" lines
my ($key);
my ($entry);

# Run ls -ld on every entry in the tree and read its output.
unless (open ($in_pipe, "/usr/bin/find . -exec /bin/ls -ld {} \\; |"))
{
    die "***ERROR*** Could not open pipe from find command, $!\n";
}

while ($current_line = <$in_pipe>)
{
    chomp ($current_line);

    # The fifth whitespace-separated field of "ls -ld" output
    # (index 4) is the size in bytes.
    $key = (split (/\s+/, $current_line))[4];

    # Only keep lines whose size field is actually numeric.
    if (defined $key && $key =~ /^\d+$/)
    {
        push @{$file_hash{$key}}, $current_line;
    }
}

close ($in_pipe);

# Print the saved lines in descending numeric order of size.
foreach $key (sort {$b <=> $a} keys %file_hash)
{
    foreach $entry (@{$file_hash{$key}})
    {
        print "$entry\n";
    }
}
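
Most of the slowness, by the way, comes from find forking a separate ls
process for every single file. If your find is the GNU version
(Ubuntu's is), a rough sketch that skips the per-file fork would be:

# GNU find prints "<size> <path>" itself; sort orders the sizes
# numerically, largest first; head keeps the top 20
find . -type f -printf '%s %p\n' | sort -nr | head -n 20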

 
Answer #6    Answered On: Dec 26    

A simple option is to search recursively through the dirs and pipe
into a sort.

Something like this?

ls -sSR --hide=d | sort -g -k 1


 
Answer #7    Answered On: Dec 26    

When I try it on this directory tree (just some files from recent
projects to make a small tree to work with):

abc@zaphod$ find . -print
.
./find_big_files.pl.13MY08
./baz
./baz/shirt_back_no_aflac.gif
./baz/NOTES
./baz/shirt_front.gif
./bar
./bar/find_big_files.pl.13MY08
./bar/find_big_files.pl
./bar/find_big_files.pl~
./bar/find_big_files.pl.dead_end
./find_big_files.pl
./find_big_files.pl~
./find_big_files.pl.dead_end
abc@zaphod$

this is what I get:

abc@zaphod$ ls -sSR --hide=d | sort -g -k 1

.:
./bar:
./baz:
total 16
total 24
total 36
4 bar
4 baz
4 find_big_files.pl
4 find_big_files.pl
4 find_big_files.pl~
4 find_big_files.pl~
4 find_big_files.pl.13MY08
4 find_big_files.pl.13MY08
4 find_big_files.pl.dead_end
4 find_big_files.pl.dead_end
4 NOTES
16 shirt_back_no_aflac.gif
16 shirt_front.gif
abc@zaphod$

It's also *very* slow: on a bigger directory tree, I've had it running
for about 15 minutes now, and nothing has been printed.

After my Dr. appointment today, I'll maybe look into the man pages for
ls and sort and see if I can figure something out. I already went down
that route before the perl script and got nowhere useful, but as I
said, I'm pretty sure a one-line solution exists.

BTW, here's what the perl script puts out:

abc@zaphod$ find_big_files
-rw-r--r-- 1 abc abc 14995 2008-05-13 07:48 ./baz/shirt_front.gif
-rw-r--r-- 1 abc abc 12465 2008-05-13 07:48 ./baz/shirt_back_no_aflac.gif
drwxr-xr-x 4 abc abc 4096 2008-05-13 07:47 .
drwxr-xr-x 2 abc abc 4096 2008-05-13 07:48 ./baz
drwxr-xr-x 2 abc abc 4096 2008-05-13 07:46 ./bar
-rwxr-xr-x 1 abc abc 920 2008-05-13 07:46 ./bar/find_big_files.pl
-rwxr-xr-x 1 abc abc 920 2008-05-13 07:46 ./find_big_files.pl
-rwxr-xr-x 1 abc abc 804 2008-05-13 07:46 ./bar/find_big_files.pl.dead_end
-rwxr-xr-x 1 abc abc 804 2008-05-13 07:46 ./find_big_files.pl.dead_end
-rwxr-xr-x 1 abc abc 695 2008-05-13 07:46 ./find_big_files.pl.13MY08
-rwxr-xr-x 1 abc abc 695 2008-05-13 07:46 ./bar/find_big_files.pl.13MY08
-rwxr-xr-x 1 abc abc 695 2008-05-13 07:46 ./bar/find_big_files.pl~
-rwxr-xr-x 1 abc abc 695 2008-05-13 07:46 ./find_big_files.pl~
-rw-r--r-- 1 abc abc 34 2008-05-13 07:48 ./baz/NOTES
abc@zaphod$
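
For what it's worth, the core problem with the ls route is that -R
prints a separate, separately sorted listing for each directory, so a
single sort over all of it mixes block counts, headers, and bare
filenames together. du walks the tree in one pass and keeps the full
paths; a sketch, assuming GNU coreutils:

# du -a lists every file with its size in 1K blocks and its path;
# sort -nr puts the biggest first
du -a . | sort -nr | head -n 20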

 
Answer #8    Answered On: Dec 26    

Many thanks to you - both of your answers have now found a permanent
place in my toolbox.

 
Answer #9    Answered On: Dec 26    

No problem.

I just thought of a way to get the results I like in a one-liner; here it is:

find . -exec ls -ld {} \; | grep -v ^d | sort -n -r --key=5 | less

Of course, you could also redirect the output to a log file:

find . -exec ls -ld {} \; | grep -v ^d | sort -n -r --key=5 > /tmp/log_file

It's slower than the perl solution, but of the two, it's the cooler
one. I think I'm going to refine the perl script a bit, add something
like a progress bar so that you can see it's still running, and make
it more robust. It's something I want to keep around. OTOH, if I still
worked as an admin, I'd save the one-liner in my Palm Pilot so that I
could type it in whenever I needed it.
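
One more variant: since find can filter out directories itself, you
can drop the grep stage entirely (any POSIX find supports -type):

# -type f skips directories, so grep -v ^d is no longer needed
find . -type f -exec ls -ld {} \; | sort -nr --key=5 | less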

 
Answer #10    Answered On: Dec 26    

I am pretty new to Ubuntu... is there a simple
way to log in as root and delete a directory?

 
Answer #11    Answered On: Dec 26    

I just couldn't let this go. Both the one-liner and the perl program I
wrote were too darn slow for me, so I wrote a more efficient perl
script that does about the same thing. It's a bit fancier as well: it
prints progress dots as it walks the directory tree to show that it's
still running, and you can pass a starting directory on the command
line if you want. It's about 14 times faster than either of the other
two methods.

You can read about it and/or download at:

http://www.buchanan1.net/find_big_files.shtml

Note that the ftp link is on my other web provider, as they offer
anonymous ftp, so don't be surprised that the file is off-site.

 
Answer #12    Answered On: Dec 26    

Open a terminal and type "sudo nautilus". It will open the file browser
as root. Just be careful.
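
A caution that's often given (standard advice, though I haven't tested
the failure mode myself): for graphical programs, use gksudo rather
than plain sudo, so that root doesn't end up owning configuration
files in your home directory:

# run Nautilus as root the recommended way on Ubuntu
gksudo nautilus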

 
Answer #13    Answered On: Dec 26    

Try: rm -rf your_directory

in a bash shell.
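
If you're at all unsure what's in there, a slightly safer pattern is
to look first and have rm confirm once before it recurses (the -I flag
assumes GNU rm, which Ubuntu ships):

# see what you're about to delete, then remove with one confirmation
ls -la your_directory
sudo rm -rI your_directory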

 
Answer #14    Answered On: Dec 26    

I'm no expert on this, but I have heard through the Ubuntu forums not
to use rm -rf; it can damage your system if you point it at the wrong
directory. Search Google for "rm -rf" and you'll find a lot on it.
Sudo nautilus might do what you want done.

 
Answer #15    Answered On: Dec 26    

There is an application called FSLint that I have used to look for duplicate
files before. I think you could search by file size with that program as
well.
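
If it isn't installed already, it should be in the Ubuntu
repositories; I believe the package name is fslint:

# install FSLint from the repositories (package name is an assumption)
sudo apt-get install fslint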

 