Sunday, February 8, 2009

Deleting Old Files In Linux

When my scheduled database backup runs, it leaves behind many old files that I no longer need. So I asked myself: how can I remove them automatically right after the backup is done?

So I read the 'find' command manual. The command is "man find" (without the double quotes) if you do not know it yet :) and I found these options:

-atime n
File was last accessed n*24 hours ago. When find figures out
how many 24-hour periods ago the file was last accessed, any
fractional part is ignored, so to match -atime +1, a file has to
have been accessed at least two days ago.


-ctime n
File’s status was last changed n*24 hours ago. See the comments
for -atime to understand how rounding affects the interpretation
of file status change times.

-name pattern
Base of file name (the path with the leading directories
removed) matches shell pattern pattern. The metacharacters
(‘*’, ‘?’, and ‘[]’) match a ‘.’ at the start of the base name
(this is a change in findutils-4.2.2; see section STANDARDS
CONFORMANCE below). To ignore a directory and the files under it,
use -prune; see an example in the description of -path. Braces
are not recognised as being special, despite the fact that some
shells including Bash imbue braces with a special meaning in
shell patterns. The filename matching is performed with the use
of the fnmatch(3) library function. Don’t forget to enclose
the pattern in quotes in order to protect it from expansion by
the shell.

-delete
Delete files; true if removal succeeded. If the removal failed,
an error message is issued. If -delete fails, find’s exit
status will be nonzero (when it eventually exits). Use of -delete
automatically turns on the -depth option.


So, combining these options, I get this command:
$ find . -name 'backup_*.db' -ctime +30 -delete
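Before trusting -delete, it can be worth running the same expression with -print first to preview exactly which files would be removed. The directory below is just an example path, not my real backup location:

$ find /var/backups/db -name 'backup_*.db' -ctime +30 -print

If the list looks right, swap -print for -delete and you are done.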

That's it! I put the command in my backup script, which cron runs every week, and my old files are deleted right after the backup is done.
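For anyone curious how this might fit together, here is a minimal sketch of such a script plus a crontab line. The paths, the mysqldump command, the database name, and the schedule are only placeholders for illustration, not my actual setup:

#!/bin/sh
# backup.sh - dump the database, then prune dumps older than 30 days
BACKUP_DIR=/var/backups/db

# create today's dump (assumes a MySQL database called "mydb")
mysqldump mydb > "$BACKUP_DIR/backup_$(date +%Y%m%d).db"

# remove backups whose status changed more than 30 days ago
find "$BACKUP_DIR" -name 'backup_*.db' -ctime +30 -delete

# example crontab entry: run every Sunday at 02:00
# 0 2 * * 0 /usr/local/bin/backup.sh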