Cached with the author's permission -- He emailed me! =)
Every Commodore 64 scener has most likely used these fiendish, evil little utilities known as "crunchers". Whatever work of yours is about to be finished - your demo or your crack - one of the last steps in completing it will most likely be to compress it. And what a step it is! Many a cracker has spent a whole night waiting for the compression program to finish its work. A 230-block program which is already packed with an equal-char packer may well eat up to seven hours of crunching. That's not exactly overwhelming if you consider that people might want to finish (and, hell, yes, PACK!) their demos at a party in time! Other, more clever folks may have tried to transfer their data to another computer, only to find out that compression utilities on those machines tend to finish their work dozens of times faster! Obviously, this cannot be explained by sheer computing power alone... the algorithms on other computers must simply be better!
In this article, we will take an in-depth look at how an equal-sequence cruncher works on the C64, and we will find the reason why it is so slow. We will find ways to speed things up, and see what means, hardware- or software-wise, are required to accomplish that. At the end of the article, we will try to optimize all these means, and will generally juggle around with them in order to get fast compression with a reasonable amount of extra hardware (most notably, memory). Surprisingly, we will also try to use other hardware to improve crunching!
Normal Operation ∞
In order to improve anything, we had better get a deep understanding of what is going on already. So what, basically, does an LZW (equal-sequence) cruncher do? The answer is, of course, that it scans for similar strings in a given file and tries to eliminate them, so that they appear only once. Let me try to draw an example:
An input file consists of the following bytes (all values in hexadecimal):
ADDRESS  BYTES
$1000:   $00 $01 $03 $03 $0a $0d $01 $03 $03 $0a $03 $03 $0a
Yes, this string packs very well. An LZW cruncher will detect all similar strings, and will store each of them only once in the output file. Any time such a string should appear again in the decompressed memory, the cruncher will store some special codes instead of the string itself. These special codes tell the decompressor where to look for the bytes, and how many of them to take.
In the example above, the result might look like this :
$00 $01 $03 $03 $0a $0d (<code for "get earlier mem"> <code for "4 bytes"> <code for "offset">) (<again code for "get earlier mem"> <code for "3 bytes"> <code for "offset">)
The decruncher needs to know at least 3 things:
- shall I simply put these bytes into memory, or shall I get a string from earlier memory?
- if I get a string, how many bytes do I have to copy, and
- from what memory address?
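A minimal sketch in Python of how a decruncher acts on these three pieces of information. The token format here is made up purely for illustration (real crunchers use far more compact codes): ("lit", bytes) means "simply put these bytes into memory", and ("copy", length, offset) means "get `length` bytes from `offset` bytes back in the already-decrunched output".

```python
def decrunch(tokens):
    out = bytearray()
    for token in tokens:
        if token[0] == "lit":            # simply put these bytes into memory
            out.extend(token[1])
        else:                            # get a string from earlier memory
            _, length, offset = token
            for _ in range(length):      # byte by byte, so the copy can
                out.append(out[-offset]) # overlap its own output
    return bytes(out)

# The example file from above, encoded by hand:
tokens = [
    ("lit", bytes([0x00, 0x01, 0x03, 0x03, 0x0A, 0x0D])),
    ("copy", 4, 5),   # $01 $03 $03 $0a, found 5 bytes back
    ("copy", 3, 8),   # $03 $03 $0a, found 8 bytes back
]
print(decrunch(tokens).hex())
```

Running this reproduces exactly the 13 bytes of the input file at $1000.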
There really are a hell of a lot of methods to encode your information so that the decruncher will know - in an orderly manner - when to simply put data into memory, when to get a string from memory, and when to stop. For example, some compressors ALWAYS assume they have to copy data from memory unless explicitly told otherwise! These compressors have "chunks" of data everywhere, each preceded by header information, much like every sector on a disk has a header telling the disk drive where it is.
However, we are not interested in the format of the headers and bytes that are encoded. We are interested in why the encoding process is so slow, notably on large files, and what we can do about it.
To understand this, let us return to the example above:
ADDRESS  BYTES
$1000:   $00 $01 $03 $03 $0a $0d $01 $03 $03 $0a $03 $03 $0a
What is the cruncher supposed to do? The cruncher has a pointer, say at $0002/$0003 (2 zeropage addresses), to our program at $1000. It looks through the whole memory up to $100c for a sequence. Here, it is reasonable to assume that any sequence consists of at least 2 bytes, so the cruncher will look for the bytes "$00 $01" in the rest of the memory. It will not find them, so it will simply output $00 $01 without any special notice. Next, the cruncher will try to find the next 2 bytes, $01 $03, in the memory, and it will find them (at $1006) and proceed to crunch them. The cruncher's operation will continue at $1005 (at the $0d byte), then at $1006 (the $01 $03 again), then at $100b (another sequence).
So basically, on the C64, the cruncher is stuck in a double loop. For each 2-byte sequence there is in memory, the program will scan the whole rest of the memory! This looks (and behaves) much like a BASIC loop in the following style:
for i=1 to 50000
  for j=1 to 50000
    ..... (crunching algorithm)
  next j
next i
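The same double loop can be sketched in Python (names are mine, not DSQ's; here the inner loop scans the earlier positions for the 2-byte pair at `pos`):

```python
def find_match_naive(data, pos):
    """Scan ALL other positions for the 2 bytes at `pos` -- O(n) per call,
    O(n^2) over the whole file, which is exactly why crunching takes hours."""
    pair = data[pos:pos + 2]
    for j in range(pos):              # the inner of the two nested loops
        if data[j:j + 2] == pair:
            return j                  # offset of another occurrence
    return None                       # pair not seen anywhere else

data = bytes([0x00, 0x01, 0x03, 0x03, 0x0A, 0x0D, 0x01, 0x03, 0x03, 0x0A])
print(find_match_naive(data, 6))   # $01 $03 also sits at offset 1; prints 1
```

Calling this once per position gives the outer loop, and the full scan inside gives the inner one: the double loop the rest of the article sets out to kill.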
A double loop is, unfortunately, not a really fast or favourable way to handle stuff like this. What do we need to speed all this up? The answer, after due consideration, is that we somehow have to avoid the double loop. The first idea a programmer might have is: "Hmmm. It would be something wonderful if we had some big, big arrays that contained the memory addresses of all sequences!" Any time the computer looks for a sequence, it would scan the array instead of the whole memory.
What would such code look like?
A Simple Idea of Improvement ∞
We remember that the cruncher looks for a sequence that is at least 2 bytes long. Well, then it would be cool to have some HEAVY extra memory and keep the addresses of all 2-byte values in some big arrays. Each array would have to be 64 kb big, of course, since our 64 kb input file might consist of nothing but the same 2 bytes appearing over and over again. Let us, for a (strange) moment, imagine we had a sort of super-C64 with infinite memory, and try to code a "search next sequence" routine. Somehow (magically) we already have the addresses of all 2-byte values in some big arrays:
We have: all addresses of all $0000 values in an array going from $0000-0000 to $0000-$ffff, all addresses of all $0001 values in an array from $0001-0000 to $0001-$ffff, and so on:

VALUE   ARRAY AT
$0000   $0000-0000 to $0000-$ffff
$0001   $0001-0000 to $0001-$ffff
$0002   $0002-0000 to $0002-$ffff
  .
  .
$ffff   $ffff-0000 to $ffff-$ffff
"Ui, a rather big bunch of 65536 arrays," you will say. Yep. And how would our search routine look? Answer: the search routine would look very simple, if we imagine we have a super-C64 that can handle 32-bit addresses:
              ldy #$0000        ; (16 bit y register, rite)
loop          lda $ea2d0000,y   ; (we are looking for the sequence $ea2d)
                                ; (akku is a 16 bit accu, miraculously)
              beq end_of_crunch ; (if no more sequences, end)
              jsr encode        ; (crunch slave, to the work!)
              iny
              bne loop
end_of_crunch rts
Wow, this would be great! Just some self-modifying code at the "loop" label, and we are set.
However, life is not that simple. We can code the example above with 8-bit registers (akku and y register, typically by using the zeropage), but what's not so easy to get is a Commodore 64 with no less than 2^32 bytes of RAM = 4096 megabytes of RAM!
Obviously, we have to think of something less memory-eating. Here is how.
Life is not so simple at all ∞
The basic idea is to use a chained list. A "chained list" is a rather clever data structure that is composed of, basically, three things:
a) An easy-to-find start pointer to the chained list
b) the middle of the chained list, consisting of link and data (in whatever order you decide)
c) the end of the list
Case b) deserves a closer look: the middle of the chained list consists of two things. The "LINK" is a pointer to where the next element of the list can be found. "DATA" is any data we want - and what we want is, of course, still the 16-bit address where our 2-byte sequence resides in the C64's normal memory. We might reserve special values of "LINK" and "DATA" for special purposes! For example, it is reasonable to assume that our input file will never be larger than $0801-$ffff, so we might assign special meanings to the values $0000-$07ff as "DATA". For now, however, we only need one special value to help us detect case c) (which is, the end of the list), so we will use link=$0000 for this very case.
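A tiny Python model of such a chained list, under the conventions above: slot 0 is reserved, so that LINK = 0 (the $0000 marker) can mean "end of list", and DATA holds a C64 address. The concrete numbers are made up for illustration.

```python
def walk(start, links, datas):
    """Follow the LINKs from the start pointer and collect every DATA value,
    stopping at the reserved link value 0 (the end-of-list marker)."""
    hits = []
    node = start
    while node != 0:
        hits.append(datas[node])
        node = links[node]
    return hits

# A hand-built 3-element chain: element 1 -> element 2 -> element 3 -> end.
links = [0, 2, 3, 0]                     # LINK of each element
datas = [0, 0x1001, 0x1006, 0x100B]      # DATA (C64 address) of each element
print([hex(a) for a in walk(1, links, datas)])   # prints ['0x1001', '0x1006', '0x100b']
```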
Most readers are getting uneasy here, so let me sum up the monumental task that lies ahead of us:
We are going to split the task of LZW crunching into TWO parts. Part one will be to construct an array of big chained lists, and for reasons of programming convenience we are going to waste all 512 kb of external RAM just for this "big array of chained lists". This big array of chained lists will contain all the addresses of all 16-bit values that can be found inside the C64's memory (which, of course, contains the data we want to crunch). As you will see later, it really is convenient to have so much RAM to waste. Trying to make all this less RAM-wasting tends very much to increase the programming work!
Part two of our work is dedicated to actually making use of this mess of chained lists, which is rather simple anyway. For any sequence that lies ahead of us, we will look into the chained list of this specific sequence in order to determine where the next 2 bytes of this sequence are located inside the C64's memory. This is almost as simple as the imaginary 16-bit super-code program mentioned above, so hold your breath, it's worth the trouble! But first, let's write some code to actually build the big bunch of "pseudo arrays with links in between".
A tour de Memory ∞
Let us try to draw a memory map:
The 512 kb REU is split into 8 banks, bank 0 up to bank 7, so let's try to make use of that:
We will use:
- Bank 0 to hold the low byte of all start pointers to all chained lists
- Bank 1 to hold the high byte of all start pointers to all chained lists
- Bank 2 to hold the low byte of all END pointers to all chained lists
- Bank 3 to hold the high byte of all END pointers to all chained lists
- Banks 4-7 to hold all the chained lists.
"Wait a moment! Where is the middle byte of the pointers?" Some clever readers may ask. "You see, we have 512 kb ram, well, at least 256 KB ram to access but only 16 bits (2 bytes: low and hibyte) of pointers! Where are the other 2 bits? "
Well, this is a valid question. Indeed, if we need pointers into a big area, we need more than 16 bits... DO WE? There is a little trick clause here: we arrange that our small pieces of chain sit at 4-byte boundaries only, since each piece of link takes 4 bytes. So each piece of link will be at an address like $xxx0, $xxx4, $xxx8 or $xxxC. There is no need to store the least significant 2 bits, since they will always be zero! Nice trick, huh? This also means we will have to shift our 18-bit addresses down to 16-bit values, and when we get them back, we will have to shift the 2 zero bits in again to recover the 18-bit address. Simple, but it saves tons of memory!
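The 2-bit trick can be sketched in a few lines of Python (function names are mine): because every chain element starts on a 4-byte boundary, an 18-bit address into 256 kb of chain memory fits into a 16-bit pointer.

```python
def pack_address(reu_addr):
    """Squeeze an 18-bit REU address into 16 bits by dropping the 2 zero bits."""
    assert reu_addr % 4 == 0        # links always start on a 4-byte boundary
    assert reu_addr < (1 << 18)     # 256 kb of chain memory = 18-bit addresses
    return reu_addr >> 2            # now fits in 16 bits

def unpack_address(stored):
    """Shift the 2 zero bits back in to recover the full 18-bit address."""
    return stored << 2

addr = 0x2FFFC                      # an 18-bit address near the top of 256 kb
print(hex(pack_address(addr)))      # prints 0xbfff -- fits in 16 bits
print(hex(unpack_address(pack_address(addr))))  # prints 0x2fffc -- round trip
```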
Anyhow - how are we going to construct this list?
First, we are going to clear the whole 512 kb of extra RAM with zeroes. You will see that this not only clears everything but will also save us lots of work later.
What we basically need is a single integer variable, a sort of counter, that keeps track of where the free memory in the REU resides. Every time we insert one of our chunks into REU memory, we will have to increase this 16-bit counter in order to keep track of how much REU memory we have used up for our chunks.
Now, we will start with a pointer to the start of the data that is to be packed. We will get 2 bytes there (let us assume, for readability, 2 random bytes, e.g. $ea $db). So we will try to get the start pointer. If the start pointer is $0000 (no start has been set yet), the solution is simple: put a chunk of data into the next free REU memory (our counter points there), then put the start pointer (REU banks 0 and 1) there, and also put the end pointer (REU banks 2 and 3) there.
If, however, the start pointer is not $0000, it means the chunk already exists (somewhere in REU memory), so we have to follow a trickier strategy of memory allocation: first, we will still put our chunk at the REU place the counter tells us, but then we will look at the end pointer (REU banks 2 and 3) of the selected linked list, and change its link info from $0000 (end of list) to $xxxx (wherever the counter told us the "free" mem in the REU is).
In either case, at the end of this routine we have to correct the end-of-list pointer from the old value to the new value (our trusty counter tells us where).
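Pass one as just described can be sketched in Python; dictionaries stand in for REU banks 0/1 (start pointers) and 2/3 (end pointers), and two lists for the chain memory in banks 4-7. All names are mine; the real code, of course, works on raw REU banks.

```python
def build_lists(data):
    start = {}          # banks 0/1: start pointer per 2-byte value
    end = {}            # banks 2/3: end pointer per 2-byte value
    links = [0]         # banks 4-7: slot 0 reserved, so link 0 = end of list
    datas = [0]
    for addr in range(len(data) - 1):
        pair = (data[addr], data[addr + 1])
        node = len(links)            # the counter: next free chunk
        links.append(0)              # a fresh chunk is always the new end
        datas.append(addr)
        if pair not in start:        # start pointer still $0000: first sighting
            start[pair] = node
        else:                        # chain the chunk to the old end of list
            links[end[pair]] = node
        end[pair] = node             # in all cases, correct the end pointer
    return start, links, datas

data = bytes([0x00, 0x01, 0x03, 0x03, 0x0A, 0x0D, 0x01, 0x03, 0x03, 0x0A])
start, links, datas = build_lists(data)
# Follow the chain for the pair ($03, $03):
node, hits = start[(0x03, 0x03)], []
while node != 0:
    hits.append(datas[node])
    node = links[node]
print(hits)   # prints [2, 7] -- both offsets where $03 $03 occurs
```

Note that each input byte is handled exactly once: the double loop is gone from this pass entirely.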
This was tricky, wasn't it? Let's look together at the source code that does it all. We will assume our part 1 code has been properly inserted into some version of Darksqueezer, and that a JMP command was nicely placed into the DSQ cruncher after it has loaded the C64 memory with the data to be packed, and before the actual packing process starts.
The source part 1 ∞
------------------------------------ (insert coffee break here) -----------------------------------------
If you have done all your homework and carefully followed the mazelike array of linked lists, you might have been somewhat uncomfortable. No, not because you might lack the brains to follow all these links, but because so much memory has been wasted. After all, do we really need those no less than 128 kb of memory... just to find the end of each linked list, stored in bank 2 and bank 3?
Answer: no, we do not really need these extra 128 kb. Instead of keeping a huge array of end pointers, we could follow each list up to its end every time we decide to chain another chunk onto it. However, this would mean scanning the entire list each time we insert a chunk. This eats valuable crunching time and is thus rather unacceptable.
Phew! We are nearly set! Now let us take a close look at what DSQ really does 99.5% of the crunching time, and let's write the code to improve that performance by using the carefully constructed maze of lists above. Much to our pleasant surprise, we will find out that it's rather simple to code, and short, too!
But first let us disassemble what DSQ does 99% of the time. Suck it and see: if you crunch a large file using DSQ 2.0 or 2.2 and press your freezer cartridge button, you will most likely end up inside a little code fragment that sits in the zeropage. Let's disassemble that one (and yes, of course it's some clever self-modifying code...):
More sources ∞
Ok, so here is the summary: this heavily self-modifying beast searches for the next sequence by scanning the whole memory for suitable (equal) strings. $02 seems to hold the current "at least wanted" string length, $0030 and $0031 hold the start of memory, and $0047/$0048 hold a pointer to the sequence in the middle of the data. This is rather important, since we will have to use the same pointers in our source.
What can we do about it?
$30/$31 hold some important pointer, so it would be unwise to modify anything here. The BEQ command at $0033 could actually serve us to do something useful: if a sequence seems to be immediately ahead, this very BEQ will be executed. So we will leave it alone as well. But at $0035, the INY is part of the dreaded slooow loop that scans the memory. We will mercilessly place a JMP to our own code there:

$0035 JMP $0338
As you will soon see, we will have enough memory in the... TAPE BUFFER... to do all the scanning that will replace the loop from $0035 up to $003e. Yes, our routine will be longer, but not that considerably longer!
Let's sum up again what our routine has to do:
Our routine has to scan for a sequence whose low byte we know (it is in the accu) and whose high byte is in $0031. The address of the found sequence must be bigger than or equal to the number we have identified to reside in memory locations $47/$48, which also point to a sequence with the same two bytes as in $0030/$0031. We will use pointer $47/$48 only, in our case.
We will add some special flags that will make our life easier if multiple instances of the same sequence are needed on successive "calls" to our routine. (Well, it's not a call, it's a jump, but who cares, you get the meaning.)
Simple enough? Yes it is. Let's look at the code now!
The source part 2 ∞
Summary and final analysis ∞
Uh, this was quite some code. Can you sum up for yourself what it does?
It looks for a 2-byte sequence similar to the one at $0047/$0048. If that is the same 2-byte sequence as the one we were looking for earlier (a very common case!), then we simply re-use the old registers, with the proper values already set, and follow the link. If not, we scan the chained list from the start. In either case, we check whether the C64 address in the chained list's DATA is smaller than the current $47/$48 address. If so, we keep scanning the list until either the end of the list is reached or a proper address is found. If a proper address was found, we store its value into $30/$31 and jump back to the cruncher's routines. Otherwise, the end of the chained list will be reached soon, and the program will exit with code "sequence not found".
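The summary above can be sketched in a few lines of Python (names are mine, not those of the real tape-buffer routine; the list structures are the hand-built kind from pass one):

```python
def find_match(pair, min_addr, start, links, datas):
    """Walk the chained list for `pair` and return the first occurrence at or
    above `min_addr` -- or None, the "sequence not found" exit."""
    node = start.get(pair, 0)       # start pointer $0000 = list does not exist
    while node != 0:
        if datas[node] >= min_addr:
            return datas[node]      # a proper address was found
        node = links[node]          # follow the link
    return None                     # end of list: sequence not found

# Hand-built list for the pair ($01, $03): occurrences at $1001 and $1006.
start = {(0x01, 0x03): 1}
links = [0, 2, 0]
datas = [0, 0x1001, 0x1006]
print(hex(find_match((0x01, 0x03), 0x1002, start, links, datas)))  # prints 0x1006
```

Note how a pair that occurs nowhere (or nowhere past `min_addr`) is rejected after following at most a handful of links, instead of after a scan of the whole 64 kb.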
Instead of scanning the whole memory, we now scan a rather tiny list (tiny compared to the whole memory, that is). If you press reset in the middle of crunching and decide to look at (and follow) these chained lists in REU memory yourself, you will find out that they are rather small, compared to a whole 60 kb of memory to scan!
So actually, what is the worst case? The worst case would be, e.g., an input file consisting ONLY of zero bytes. There would be only ONE sequence in this whole file, and only ONE single chained list would be produced by pass one. These are those rare cases where normal Darksqueezer also tends to go haywire (i.e. become very slow). No surprise that you are advised to crunch your data with an equal-character cruncher first, to eliminate such cases.
Let us rather look at a typical case. Let's look at a game like "ELITE". It does have some sequences that can be eliminated, but obviously not too many of them. An equal-char-packed version of ELITE is something like 203 blocks; the packed version is 161 blocks. 42 blocks can be gained... this is impressive, but not too much. Normal DSQ needs 1 hour and 35 minutes to pack it. Our improved REU version takes - gasp! - only one minute and 35 seconds! What an exciting improvement! A gain in speed by a factor of 60, with the same output result, you know! Very, very impressive. And how can we explain this? One of the main speed gains is the fact that our chained lists are rather small and thus convenient to scan. The other (main) reason is that they immediately tell us when it is not worth looking for a sequence! Many times, DSQ normally scans the whole memory for a sequence, but of course FAILS, because the sequence appears only once, or really few times. Now, with our brushed-up REU routines, DSQ next-to-immediately knows when it's not worth looking for a sequence! If a sequence appears only once, its linked chain is only one item long, and DSQ immediately knows it's not worth scanning the memory! A very typical case when you crunch data that is difficult to crunch (e.g. an already crunched program... try the difference yourself!)
If, on the other hand, we have input data which is VERY WELL crunchable (e.g. "Dizzy down the Rapids", an old game), we will notice that crunching takes considerably longer than just one minute and 35 seconds. On Dizzy down the Rapids, it took something like 15 minutes... the longest crunch time I have experienced with my 512 kb REU Darksqueezer version! Pretty long? No way - CruelCruncher needed the whole night again, with a 3 blocks WORSE result. (So much for "superior packers"... I think I know why I really stuck to DSQ all this time.....)
But of course, I can still hear some people moan that they have only a 256 kb RAM expansion (and, arguably, still some more time left to waste on crunching). Let's see what we can do for these folks... but not tonight! I will leave this for another, and hopefully kewler, article!
Antitrack/Legend in 1998 !!!