We often find that our Linux server or machine has almost all of its RAM in use, even though top does not show any process consuming an excessive amount of RAM. To check RAM usage I suggest two approaches:
# top
and once it is open, press Shift+M (to sort processes by memory usage)
# free -m
The m stands for megabytes; if we want the output in gigabytes we use g, in bytes b, and in kilobytes k.
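For example (illustrative only; the exact figures depend on each machine), a couple of handy variants of free are:
# free -g
# free -m -s 5
The first shows the totals in gigabytes; the second refreshes the megabyte view every 5 seconds, which is useful for watching the cache grow or shrink.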
Fortunately, there is a (non-destructive) way to free this RAM, which is held as cache for data belonging to processes that are no longer active.
echo 1 > /proc/sys/vm/drop_caches
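Note that writing to /proc/sys/vm requires root, and if you are not already root a plain sudo echo will not work, because the redirection is performed by your unprivileged shell. A minimal sketch of the full sequence:
$ sync
$ sudo sh -c 'echo 1 > /proc/sys/vm/drop_caches'
Running sync first flushes dirty pages to disk, so that only clean cache gets dropped.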
For those of you who feel like reading further:
Invalidating the Linux Buffer Cache
When you write data, it doesn’t necessarily get written to disk right then. The kernel maintains caches of many things, and disk data is something where a lot of work is done to keep everything fast and efficient.
That’s great for performance, but sometimes you want to know that data really has gotten to the disk drive. This could be because you want to test the performance of the drive, but could also be when you suspect a drive is malfunctioning: if you just write and read back, you’ll be reading from cache, not from actual disk platters.
So how can you be sure you are reading data from the disk? The answer actually gets a little complicated, particularly if you are testing for integrity, so bear with me.
Obviously the first thing you need to do is get the data in the cache sent on its way to the disk. That’s “sync”, which tells the kernel that you want the data written. But that doesn’t mean that a subsequent read comes from disk: if the requested data is still in cache, that’s where it will be fetched from. It also doesn’t necessarily mean that the kernel actually has sent the data along to the disk controller: a “sync” is a request, not a command that says “stop everything else you are doing and write your whole buffer cache to disk right now!”. No, “sync” just means that the cache will be written, as and when the kernel has time to do so.
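To make this concrete, here is a small sketch (testfile.bin is just a placeholder name): we write some data, ask for a flush with sync, and read it back; that read-back will still almost certainly be served from the page cache, not the platters.
# dd if=/dev/zero of=testfile.bin bs=1M count=100
# sync
# dd if=testfile.bin of=/dev/null bs=1M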
Traditionally, the only way to be sure you were not reading back from the cache was to overwrite the cache with other data. That required two things: knowing how big the cache is at this moment, and having unrelated data of sufficient size to overwrite with. On older Unixes with fixed-size buffer caches, the first part was easy enough, and since memory was often expensive and in shorter supply than it is now, the cache wasn’t apt to be all that large anyway. That’s changed radically: modern systems allocate cache memory dynamically, and while the total cache is still small compared to disk drives, it can now be gigabytes of data that you need to overwrite.
Well, that’s not always so hard: for a large filesystem and relatively small memory, a simple “ls -lR” might be enough. If not, a “dd” redirected to /dev/null can fill it up. Just make sure that you are looking at different disk blocks than what you first wrote. Note that you really didn’t even need the “sync” if this is what you are doing: the overwrite forces the sync itself.
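As a rough sketch of that traditional approach (the path is a placeholder; the data read must be larger than the cache and unrelated to the blocks you wrote earlier):
# dd if=/some/other/large/file of=/dev/null bs=1M
or, on a large enough filesystem tree:
# ls -lR /usr > /dev/null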
Modern Linux kernels make this a bit easier: in /proc/sys/vm/ you’ll find “drop_caches”. You simply echo a number to that to free caches.
From http://linux.inet.hr/proc_sys_vm_drop_caches.html:
To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
(do not forget to run sync before using the command above)
To free dentries and inodes:
echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
echo 3 > /proc/sys/vm/drop_caches
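A quick way to see the effect is to compare free before and after (a sketch; the exact numbers will differ on every machine):
# free -m
# sync
# echo 3 > /proc/sys/vm/drop_caches
# free -m
The buffers and cached figures in the second free output should have shrunk noticeably.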