POSIX defines ino_t to be of an unsigned integer type, and searching
around the net didn't turn up any definitions conflicting with that. So
every ino_t can be represented in a uint64_t. (Assuming 64 bits is the
widest integer type in use for an inode number, but I'm sure that
assumption will hold for a while.)
(dev_t, on the other hand, is a bit messier. Still figuring out what to
do with that.)
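In other words, something along these lines should be a lossless
conversion (a minimal sketch under that assumption; ino_to_u64() is a
hypothetical helper, not actual ncdu code):

    #include <stdint.h>
    #include <sys/types.h>

    /* ino_t is unsigned per POSIX, so widening it to uint64_t cannot
     * lose information as long as ino_t is at most 64 bits wide. */
    static uint64_t ino_to_u64(ino_t ino) {
        return (uint64_t)ino;
    }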
2 billion files should be enough for everyone. You probably won't have
enough memory to scan a filesystem that large anyway. int is a better
choice than long, as sizeof(int) is 4 on pretty much any system ncdu
runs on.
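That assumption could even be checked at compile time with something
like this (a hypothetical guard, not present in the source):

    #include <limits.h>

    /* Fail the build if int cannot count about 2 billion files. */
    #if INT_MAX < 2147483647
    # error "int is too narrow for the file counter"
    #endif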
This allows scanning a directory without initializing ncurses. It's not
too useful at this point, since ncdu will switch to an ncurses
environment when it's done anyway, but it will become more useful once
the export-to-file feature has been implemented.
The architecture is explained in dir.h. The reasons for these changes
are twofold:
- calc.c was too complex; it simply did too many things. 399ccdeb is a
  nice example of that: it should have been an easy fix, but it
  introduced a segfault (fixed in 0b49021a) and added a small memory
  leak.
- This architecture features a pluggable input/output system, which
  should make a file export/import feature relatively simple (see the
  sketch below).
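As a rough illustration of what such a pluggable system can look like
(hypothetical names; the actual interface is the one documented in
dir.h):

    #include <stdint.h>

    /* An output consumes the items produced by an input (the
     * directory scanner, or later a file importer). */
    struct dir_output {
        /* called for each item found; returns nonzero on error */
        int (*item)(struct dir_output *out, const char *name,
                    int64_t size);
        /* called when the input is done; returns nonzero on error */
        int (*final)(struct dir_output *out);
    };

With inputs and outputs only talking to each other through such an
interface, an export-to-file feature is just one more struct dir_output
implementation.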
The current commit does not feature any user interface, so there's no
feedback yet when scanning a directory. I'll get to that in a bit.
I've also not tested the new scanning code very well yet, so I might
have introduced some bugs.
This fixes a bug where ncdu would stop scanning a directory if the
terminal window had been resized to a space small enough for the
warning to show up.
POD is somewhat simpler and more flexible. I now use ncdu.pod to
generate a nicely formatted manual page for the ncdu homepage, rather
than displaying a rendering of ncdu.1 formatted in a monospace font.
The tarball will still contain an ncdu.1, so there's no extra dependency
on pod2man. (Unless you clone from git, since ncdu.1 isn't in the
repo.)
This should be a *significant* performance increase when scanning a
directory that has many hard links.
I used the khash library written by Attractive Chaos[1]. This library
fits perfectly into ncdu's "use as little memory as possible but still
try to be very fast" policy. Its API is somewhat quirky in use, but I
guess that's due to the lack of generic programming support in C.
Blog: http://attractivechaos.wordpress.com/
Lib: https://github.com/attractivechaos/klib/blob/master/khash.h
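As a rough sketch of how khash can be used to track already-seen inodes
(simplified; the real code also has to take the device ID into
account):

    #include <stdint.h>
    #include "khash.h"

    /* A set of 64-bit keys; khash generates the implementation at
     * compile time. */
    KHASH_SET_INIT_INT64(ino)

    static khash_t(ino) *seen; /* seen = kh_init(ino); at startup */

    /* Returns 1 if this inode was seen before (i.e. it's a hard
     * link), 0 otherwise. */
    static int seen_before(uint64_t ino) {
        int absent;
        kh_put(ino, seen, ino, &absent);
        return !absent;
    }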
This used to be the default before 1.5, but for some reason the default
changed in 1.5 and 1.6. Changing it back now, because the graph really
is useful, and there's still enough space for the filename even in
smaller terminals.
This solution is far cleaner. Thanks to Ben North for pointing me to
the '*' width specifier, which has apparently been built into the
printf family of functions for quite a while now.
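For reference, this is what the '*' width specifier looks like in use
(a standalone example, not the actual ncdu code):

    #include <stdio.h>

    int main(void) {
        int width = 30; /* determined at run time, e.g. from the
                           terminal width */
        /* '*' takes the field width from the next int argument */
        printf("%-*s|\n", width, "some-filename");
        return 0;
    }

This avoids having to build a width-dependent format string by hand.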
The memory for this format is now statically allocated as well. I was
under the impression that its size would depend on wincols, but this is
the format string we're talking about; it does not have to hold the
actual line contents. I must have been sleeping again...
Oh well, this is a slight performance improvement, although it doesn't
seem to be the cause of the browsing slowness when running under
valgrind. (Obviously running ncdu with valgrind is supposed to be
slower, but the current performance is rather bad...)
Rather than storing a pointer to another memory allocation in the
struct. This saves some memory and improves performance by significantly
decreasing the number of calls to [c|m]alloc() and free().
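The usual C pattern for this is a flexible array member at the end of
the struct, so a single malloc() covers both (a minimal sketch with
hypothetical field names):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    struct item {
        int64_t size;
        char name[]; /* C99 flexible array member, same allocation */
    };

    static struct item *item_new(const char *name) {
        /* one allocation for the struct plus the name, not two */
        struct item *it = malloc(sizeof(struct item) + strlen(name) + 1);
        if (it) {
            it->size = 0;
            strcpy(it->name, name);
        }
        return it;
    }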
Here is the new multi-page listing functionality I promised in
5db9c2aea11052451c7e11bf8eef73393e4a072e.
It may look easy, but getting it to work right wasn't, unfortunately.
This optimizes a few actions (though not all), and makes the code easier
to understand and expand.
The behaviour of the browser has changed a bit with regard to
multi-page listings. Personally, I don't like this change much, so I'll
probably fix it later on.
The displayed directory sizes are now fully correct, although in the
current state this isn't all that intuitive, because:
directory size != sum(size of all files and subdirectories)
This should probably be fixed later on by splitting the sizes into a
shared and a non-shared part.
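To illustrate the proposed split (hypothetical field names, not actual
ncdu structures):

    #include <stdint.h>

    struct dir_sizes {
        int64_t size;   /* data only reachable through this directory */
        int64_t shared; /* data also hard-linked from elsewhere */
    };

Displaying the two parts separately would make it clear why a
directory's size can be smaller than the sum of its contents.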
Also, the sizes displayed after a recalculation or deletion are
incorrect; I'll fix this later on.