'char' may be unsigned on some architectures, which will cause the
"overflow check" on decrement to fail.
This would at most result in a confusing UI issue where no confirmation
option appears to be selected.
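For illustration, a minimal sketch of the pattern involved (variable
names and values are assumptions, not the actual ncdu code): when the
selected option is tracked in a plain 'char' and decremented past zero,
the wrap-around check only works if 'char' is signed.

    /* Hypothetical sketch of the signedness pitfall, not the real code. */
    char sel = 0;   /* plain 'char' is signed on x86, unsigned on e.g. ARM  */
    sel--;          /* intended to wrap to -1 ...                           */
    if (sel < 0)    /* ... but with an unsigned 'char', sel is now 255 and  */
        sel = 2;    /*     this never fires, so no option appears selected. */

Using 'signed char' or a plain 'int' for the index avoids relying on the
implementation-defined signedness of 'char'.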
Unfortunately, there wasn't a single bit free in struct dir.flags, so I
had to increase its size to 16 bits. This commit is just the initial
preparation; there are still a few things to do:
- Add "extended information" cli flag to enable/disable this
functionality.
- Export and import extended information when requested
- Do something with the data.
I also did a few memory measurements on a file list with 12769842 items:
before this commit: 1.239 GiB
without extended info: 1.318 GiB
with extended info: 1.698 GiB
It's surprising what adding a single byte to a struct can do to the
memory usage. :(
Fixes https://dev.yorhel.nl/ncdu/bug/103
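As a rough sketch of what the widening amounts to (field names other
than 'flags' are illustrative assumptions, not the real struct dir
layout):

    #include <stdint.h>

    struct dir {
        struct dir *parent, *next, *sub;
        int64_t size, asize;
        int items;
        unsigned short flags;  /* was 'unsigned char'; no bit left, so 16 bits now */
        char name[];           /* name stored inline at the end of the struct      */
    };

Because of struct alignment and allocator rounding, one extra byte per
node can easily end up costing 8 or 16 bytes per allocation, which is in
line with the numbers above.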
I don't think a stack overflow as a result of recursion is exploitable
on a modern system. It should just result in an unfortunate write to a
page that is not writable, followed by a crash.
I've decided not to use ls-like file name coloring for now, instead just
coloring the difference between a (regular) file and a dir.
Still looking for a good color scheme for light backgrounds.
This should fix https://dev.yorhel.nl/ncdu/bug/99, with the downside
that it requires a C99 compiler.
I also replaced all occurrences of static allocation of struct dir with
dynamic allocation, because I wasn't really sure whether static
allocation of structs with a flexible array member is allowed. In the
case of dirlist.c the dynamic allocation is likely required anyway,
because it does store a few bytes in the name field.
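A minimal sketch of the C99 construct and the allocation pattern in
question (the helper name and the exact size calculation are
illustrative, not copied from the ncdu source):

    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    struct dir {
        int flags;                    /* stand-in for the fixed fields */
        char name[];                  /* C99 flexible array member     */
    };

    /* Allocate a struct dir with room for the name appended to it. */
    static struct dir *dir_create(const char *name) {
        struct dir *d = calloc(1, offsetof(struct dir, name) + strlen(name) + 1);
        if (d)
            strcpy(d->name, name);
        return d;
    }

Static or stack allocation of such a struct leaves no storage for the
flexible member, which is why switching everything to dynamic allocation
is the safe choice.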
TODO:
- Add (ls-like) colors to the actual file names
-> Implement full $LS_COLORS handling or something simple and custom?
- Test on a white/black terminal, and provide an alternate color scheme
if necessary.
- Make colors opt-in?
Check if the environment variable NCDU_SHELL is defined before the SHELL
variable is checked. This makes it possible to specify a program to
execute when 'b' is pressed. Setting SHELL to, for example, "mc"
(Midnight Commander) didn't work, because mc already uses SHELL to
execute commands.
The check for the system() exit status is slightly problematic, because
bash returns the status code of the last command it executed. I've set
it to only check for status code 127 now (command not found) in order to
at least provide a message when the $SHELL command can't be found. This
error can still be triggered when executing a nonexistent command within
the shell and then exiting.
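A hedged sketch of the lookup order and the exit-status handling
described here (the function name and the DEFAULT_SHELL macro standing
in for the compile-time default are assumptions):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    #ifndef DEFAULT_SHELL
    #define DEFAULT_SHELL "/bin/bash"   /* compile-time fallback */
    #endif

    static void spawn_shell(void) {
        /* NCDU_SHELL wins over SHELL, so e.g. mc users can still override it. */
        const char *shell = getenv("NCDU_SHELL");
        if (!shell || !*shell)
            shell = getenv("SHELL");
        if (!shell || !*shell)
            shell = DEFAULT_SHELL;

        int r = system(shell);
        /* Only 127 ("command not found") is treated as an error; any other
           code may just be the exit status of the last command run inside
           the shell. */
        if (r != -1 && WIFEXITED(r) && WEXITSTATUS(r) == 127)
            fprintf(stderr, "ERROR: can't execute %s\n", shell);
    }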
Key 'b' in the browse window spawns a shell in the current directory.
We first check the user's $SHELL environment variable for the preferred
shell interpreter. If it's not set, we fall back to the compile-time
configured default shell (usually /bin/bash).
Signed-off-by: Thomas Jarosch <thomas.jarosch@intra2net.com>
Turns out that being able to open an empty directory actually has its
uses:
- If you delete the last file in a directory, you no longer get sent
back to the parent directory. This allows keeping 'd' pressed without
worrying that you'll delete stuff outside of the current dir. (This is
the primary motivation for doing this.)
- You can now scan and later refresh an empty directory, as suggested by
#2 in http://dev.yorhel.nl/ncdu/bug/15
Tiny bug fix: The size of an excluded directory entry itself should not
be counted, either. This is consistent with what you'd expect: a cache
directory with thousands of files can easily take up several megabytes
for the dir entry alone, but from the perspective of a backup system
that recognizes cache dirs, the dir is empty and therefore shouldn't
take up any extra space at all.
Use a macro instead of the global constant `cachedir_tag_signature`.
Use `memcmp` instead of `strncmp`.
Add `has_cachedir_tag` to exclude.h.
(See http://dev.yorhel.nl/ncdu/bug/30)
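A sketch of what has_cachedir_tag boils down to (buffer sizes and error
handling are simplified compared to the actual implementation); the
43-byte header is the one defined by the Cache Directory Tagging
specification:

    #include <stdio.h>
    #include <string.h>

    #define CACHEDIR_TAG_FILENAME  "CACHEDIR.TAG"
    #define CACHEDIR_TAG_SIGNATURE "Signature: 8a477f597d28d172789f06886806bc55"

    /* Returns non-zero if 'dir' contains a valid CACHEDIR.TAG file. */
    int has_cachedir_tag(const char *dir) {
        char path[4096], buf[sizeof CACHEDIR_TAG_SIGNATURE - 1];
        FILE *f;
        int match = 0;

        snprintf(path, sizeof path, "%s/%s", dir, CACHEDIR_TAG_FILENAME);
        if ((f = fopen(path, "rb")) == NULL)
            return 0;
        /* memcmp rather than strncmp: the signature has a fixed length and
           the bytes read from the file are not NUL-terminated. */
        if (fread(buf, 1, sizeof buf, f) == sizeof buf)
            match = memcmp(buf, CACHEDIR_TAG_SIGNATURE, sizeof buf) == 0;
        fclose(f);
        return match;
    }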