@ kl522
Thanks. I was hoping for a lot more difference.
You should be aware of some history, if you're not:
google <squashfs cloop knoppix 2004 klaus>
Thread from 1/25/2005
http://www.knoppix.net/forum/threads...xperimental%29
Thread from 11/12/2005
http://www.knoppix.net/forum/threads...squashfs-rocks
There may be some truth in some of the older threads. However, the cloop and squashfs of that era may be quite different from today's versions. For software and hardware, six months or a year is already a big difference, so I am not too sure of their relevance from today's viewpoint.
Technical analysis might be useful, but unless it is substantiated with real-life data it remains academic. I dare say that to see a significant and conclusive difference in performance, one would have to design experiments carefully to observe it. Otherwise, at a gross level, and especially on today's hardware, it will be hard to notice any difference.
And for usage history, I put my bet on squashfs. You see it in almost all embedded Linux devices today, and those devices have much more stringent CPU and memory constraints than a typical notebook or desktop.
@ kl522
I would like to sign on to krishna's post #5 and suggest Klaus K. probably has good reasons for lagging on the squashfs effort. Among other reasons, Ubuntu and Fedora are already plowing this ground, while KK is a one-man effort AFAIK.
This is not to diminish your effort or Forester's, merely to suggest that we keep our discussions to computer metrics, and not spend any time on long-distance prognostications about ulterior motives.
Forester's decompression times on 'a machine at work' seemed attractive. What were its parameters? How does that machine compare to my laptop/SD-card rig?
Since you use the word "probably", it is belief-based. Period.
The time difference in compression is fully explainable, as I have already mentioned.
Likely Forester did not use lzma compression for squashfs. Lzma has the behaviour that it takes a long time to compress but decompresses very fast. When he uses '-b' for cloop, that results in using both lzma and gzip; the gzip time is insignificant compared to the lzma time.
But the point is that even without lzma, squashfs produces a smaller image, if Forester's experiment is correct. (In my posts some time ago, I used gzip for both cloop and squashfs, and that also showed squashfs producing a smaller image.) Once you use lzma-squashfs, you will see roughly another 20% reduction in image size.
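To make the gzip-versus-lzma trade-off concrete, here is a minimal, hypothetical demonstration using the standard gzip and xz tools (xz implements LZMA2; mksquashfs and cloop's tools wrap their own compression code, so exact numbers will differ, but the shape of the trade-off is the same: lzma-family compression is slower to compress, fast to decompress, and produces smaller output):

```shell
#!/bin/sh
# Compare gzip against lzma-style (xz) compression on the same data.
# Illustrative only: numbers from real squashfs/cloop images will differ,
# but xz should consistently produce the smaller result.
set -e
sample=$(mktemp)
seq 1 200000 > "$sample"            # ~1.4 MB of moderately compressible data

gz_size=$(gzip -9 -c "$sample" | wc -c)
xz_size=$(xz   -6 -c "$sample" | wc -c)

echo "original: $(wc -c < "$sample") bytes"
echo "gzip -9:  $gz_size bytes"
echo "xz -6:    $xz_size bytes"     # noticeably smaller than gzip

rm -f "$sample"
```

Timing the two commands (e.g. with `time`) on a large input shows the other half of the trade-off: xz takes far longer to compress, while `gzip -d` and `xz -d` both decompress quickly.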
Ladies, gentlemen, please. "Calm down. It's only a commercial". One set of results does not prove anything and should not be used to jump to conclusions.
The improvement in boot time at work does not prove that squashfs is faster than cloop; it is an unexplained side effect. The slow part of the boot is the udev probing. Why should that be hitting the compressed file system? After the green bar has gone as far as it will, the spinner shows the system is still working. It spins for a long time with cloop but not with squashfs. Comments in /etc/init.d/knoppix-autoconfig suggest to me that the boot is waiting for I/O activity to die down. What I/O, I don't know; the VirtualBox console indicator shows no USB I/O at this time.
I did say (perhaps not clearly) that I used the Squeeze version of mksquashfs, which depends (have a look on the Debian repository web-site) on a gzip library, not an lzma library. Ergo, I used gzip compression. The same web-site shows that the Sid version of mksquashfs depends on several compression library packages, including liblzo2-2, which supports lzma and lzma2 compression.
kl522 says that Linux kernel 2.6.38 contains lzma compression, but I am using Knoppix 6.4.3, which runs atop Linux kernel 2.6.36. I infer that had I used lzma compression, I would not have been able to boot my squashed file system.
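As a practical aside, one can check beforehand whether the running kernel can mount squashfs at all, and which squashfs options it was built with. This is a sketch assuming the usual Debian/Knoppix paths; other distros may keep the kernel config elsewhere:

```shell
#!/bin/sh
# Check whether the running kernel knows about squashfs, and which
# CONFIG_SQUASHFS options it was compiled with (Debian-style paths).
if grep -qw squashfs /proc/filesystems; then
    echo "squashfs: supported by the running kernel"
else
    echo "squashfs: not registered (may still be available as a module)"
fi

cfg="/boot/config-$(uname -r)"
if [ -r "$cfg" ]; then
    grep '^CONFIG_SQUASHFS' "$cfg" || echo "no CONFIG_SQUASHFS entries found"
else
    echo "kernel config not readable at $cfg"
fi
```

On a 2.6.38+ kernel with xz support you would expect to see a `CONFIG_SQUASHFS_XZ=y` line; on 2.6.36 you would not, which is the point made above.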
There are those who say cloop requires more memory than squashfs. Perhaps; I don't know, but the arguments for this that I have read so far appear specious to me.
Linux manages the disk cache. If memory isn't required for anything else, Linux will use it to cache disk contents but as soon as memory is required for something else it will free up disk cache. This is why, with two otherwise identical machines, the one with the more memory will appear to run faster. It is also why, at the time, Linux 'ran faster' than Windows 98.
The disk cache management is independent of squashfs and cloop. Because cloop is a loop device, data gets cached twice - once before and again after decompression (also true of knoppix-data.img, but without the decompression). This is given as 'proof' that cloop requires more memory than squashfs. Poppycock.
Somehow, somewhere, squashfs must be buffering (aka caching) data before decompression. If its cache is too small, it will have to 'hit the disk' more often. It might use less memory, but that would make it slower.
Block size might be more significant and might give different results for different users. I used 64 KB blocks from the cloop example on the Wiki. The squashfs man page says its default is 128 KB. kl522's examples appear to be using 256 KB.
Now, if you are starting up mega Windoze-like applications that require tens of MB to display an OK button, big blocks are probably going to make your app start-up faster. How often do you start these programs? Once per session, so you don't need the disk cache.
If you are a sentimental old Unix fuddy-duddy who is reluctant to say goodbye to the power and productivity of the old command line interface, you want the disk cache to cache the commands you use a lot. The very idea that typing 'pwd' might cause squashfs to go off and read a 256 KB block and decompress it into a memory 'block' twice that size in order to run a program only 25 KB in size is embarrassing.
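The arithmetic behind that complaint is easy to sketch. Assuming (hypothetically) a 25 KB program, roughly 2:1 compression, and one block faulted in per launch, the cost per launch for the block sizes mentioned above works out like this:

```shell
#!/bin/sh
# Back-of-envelope read amplification when launching a small program
# from a compressed image. All numbers are illustrative assumptions:
# 25 KB program, ~2:1 compression ratio, one block read per launch.
prog_kb=25
for block_kb in 64 128 256; do
    awk -v b="$block_kb" -v p="$prog_kb" 'BEGIN {
        printf "block %3d KB: ~%3d KB read from disk, %3d KB decompressed, %.1fx the program size\n",
               b, b / 2, b, b / p
    }'
done
```

So at 256 KB blocks, every cold launch of a 25 KB command costs roughly ten times the program's size in decompressed data, which is exactly the objection raised above for command-line workloads.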
I have a little problem:
today I dug out my USB stick with Knoppix on it, just for fun, and messed around with it.
Don't ask me how, but I ruined my minirt.gz.
I have no Linux installed on my laptop, so I can't patch it myself again, and this is needed because my USB install uses squashfs.
Can someone share his?
... please contact me; I can send you minirt.gz from 6.4.4
listings@wp-schulz.de
thanks a lot werner.
For the people who want squashfs without problems (or with the same issue as me), go here: http://dl.dropbox.com/u/15024434/initrd.gz
It seems that Klaus K prefers cloop because of its versatility and well-proven good properties. But that is of course "past experience", not necessarily the present. He has his good reasons, but I think testing out squashfs should be encouraged, as eventual problems and improvements will have a much broader impact than for cloop, and fixes are therefore more likely to happen.
We should aim for a clean extension of the mounting function in the minirt init to cope with squashfs. Klaus K is of course free to reject such a patch, but if we prove it to be useful, I don't think it will be rejected for very long. (Reminiscent of Kanotix forking off because of reluctance about HD installs... not much later we had 0wn...)
When running cloop compression without optimization, it goes almost as fast as squashfs, so compression time isn't that much of an issue. But the resulting difference in compressed size amounts to about the last GB of programs stuffed into memory from a DVD-sized image, and that is practically relevant. With the ever-increasing footprint of basic system functions and utilities, it tends to become more rather than less relevant over time, too. I see this very clearly with my pure 64-bit version of 6.4.4.
In the 64-bit context, it should be noted that there have been some adaptation problems with cloop, and I'm not quite sure they are all ironed out by now; Klaus' latest cloop patch is from July 6, and it seemed to make some difference for 64-bit remastering. Even so, I still cannot make busybox (1.1.17, amd64, Debian package; 1.1.18 compiled from source doesn't run at all) get the mounting right, even though I can do it manually in debug mode. I should add that using a freshly compiled 2.6.39.2 64-bit kernel did not help with this bug, only a few others.
I'm going to try the squashfs alternative here, and if that goes through, it will be a strong argument pro squashfs, I think. I would also like to add that the way cloop is used in current Knoppix is really not file-system agnostic; rather, we have a (harmless at best) step via isofs, which introduces another set of potential problems. Maybe small and/or irrelevant, but most likely not empty.
We should, at least, know what there may eventually be to learn from other successful live distros, in particular Debian-based ones like Ubuntu live and Kanotix. BTW, I looked at the init for the Debian 6.0.1 live CD showcasing LXDE; I think Knoppix is far better, and it wasn't a very good user experience either. I just needed it as a starting point for "pure" 64-bit Knoppix.