Am I crazy? Last night I had an idea while thinking about how FP numbers work -- you know, how the Z80 even has a DAA instruction to adjust results back into the \$00 to \$99 decimal-digit range after an addition. However, think about this:

can a byte hold more than two decimal digits, when not coded in BCD format?

99 in non-BCD could be represented as 9*10+9, which is \$63. But what if each digit were just 10 (in decimal) apart in value? Let's say 99 is actually 9+10+9, which is only 28 in decimal -- but did you see what I think I did there? You see, for every \$0A increment you climb in the byte, you can store a whole digit! Therefore, 678 is (6+20)+(7+10)+(8), which is 26+17+8, which is 51 -- a value that fits easily in a byte, yet in packed form stands for 678! Now, unpacking it would be very tedious... and would probably be extremely hard to do without completely ruining the number we're after. It's very hard to think about, and it hurts my head, because you have to think of things in binary and decimal form at the exact same time for it to make any sense.

Maybe this makes no sense anyway; I have, of course, never really looked into bit mangling... That could debunk this whole thing.

With that in mind, if this is as impossible as spontaneous generation, are there actual ways of fulfilling this -- of packing more than 256 distinct values into an ordinary binary byte?
I don't see how you could unpack it at all; after all, 31 and 13 would both be 14, right?
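The additive scheme from the post can be sketched in a few lines. This is a hypothetical `pack()` of my own naming, assuming the rule described above (the digit in the 10^i place contributes digit + 10*i) -- and it reproduces both the worked example and the collision the poster is worried about:

```python
def pack(n):
    """Hypothetical pack() per the scheme in the post:
    the digit in the 10**i place contributes digit + 10*i."""
    return sum(int(ch) + 10 * i for i, ch in enumerate(reversed(str(n))))

print(pack(678))           # 51, matching the post's worked example
print(pack(31), pack(13))  # both collapse to 14: a collision
```

Since 31 packs to (3+10)+1 = 14 and 13 packs to (1+10)+3 = 14, two different numbers really do land on the same byte value.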

OTOH, yes, you can pack more than 2 digits into 1 byte - log(256) / log(10) digits, to be exact, which is somewhere around 2.4.

That's why an IEEE double packs more precision into 8 bytes than a TI float does into 9 (15 digits vs. 14) - and the difference would have been bigger, but a double uses more bits for the exponent as well.

edit: a double holds 15.9-something digits, rounded down to 15.
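Those two figures fall straight out of the base-conversion logarithms; a quick check, assuming the IEEE double's 53-bit significand (52 stored bits plus the implicit leading 1):

```python
import math

digits_per_byte = math.log10(256)    # ~2.408 decimal digits of information per byte
double_digits = math.log10(2 ** 53)  # 53 significand bits -> ~15.95 decimal digits
print(round(digits_per_byte, 3), round(double_digits, 3))
```

Both values match the posts above: about 2.4 digits per byte, and a double's "15.9-something" digits rounding down to 15.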
Ashbad, any scheme that you use that manages to compress more than 256 binary values into a byte will inevitably have collisions between at least two of those values. So yes, you are crazy
Not true; 13 would be 13 and 31 would be 31. You see, IF you can unpack it, then you would go backwards: take modulus 10, add that to a result number, divide the compressed number by 10, repeat, etc. - but with more than just that, so that each step of nine isn't actually 9, but a full digit.

The more I think, the more I think I'm either really crazy or really into a weirdly possible idea.
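The mod-10 loop described in that post can be tried directly. This is a sketch with names of my own; it shows that the loop only ever recovers the digits of the packed value itself, not the original number:

```python
def unpack(packed):
    """The reverse pass suggested above: repeatedly take mod 10,
    collect the remainder, and divide by 10. (Hypothetical sketch.)"""
    digits = []
    while packed:
        digits.append(packed % 10)
        packed //= 10
    return digits[::-1]

# Under the additive scheme, 31 packs to (3+10)+1 = 14 and 13 packs
# to (1+10)+3 = 14, so the unpacker only ever sees 14:
print(unpack(14))  # [1, 4] -- neither 31 nor 13 comes back
```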

QWERTY: AH NO I BE TEH CRAZY

see what 11 hours of digging does to you?
I'm pretty sure it's not possible. Let me explain it better: the binary values 0b00000000 through 0b11111111 are all of the unique ways of arranging bits in a byte. There are thus 256 unique byte values. If your algorithm is perfectly deterministic (which it had better be if it's to be useful), then it can transform each unique byte into another unique number by a direct mapping. There's no way to deterministically map the set of bytes onto a larger set of numbers without more information.
The problem, as others mentioned, is that nearly every possible packed value in this scheme could represent many different unpacked numbers. If a packed value of “51” could represent (6+20)+(7+10)+(8), it could also represent (6+20)+(8+10)+(7) or (5+20)+(9+10)+(7) or (9+20)+(7+10)+(5) or... how would the unpacker know which one it should be?
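The ambiguity is easy to count. Restating the thread's scheme as a hypothetical `pack()` of my own naming and running every three-digit number through it shows just how crowded each packed value gets:

```python
from collections import Counter

def pack(n):
    # Hypothetical restatement of the thread's scheme:
    # the digit in the 10**i place contributes digit + 10*i.
    return sum(int(ch) + 10 * i for i, ch in enumerate(reversed(str(n))))

counts = Counter(pack(n) for n in range(100, 1000))
print(len(counts))        # 900 three-digit inputs land on only 27 packed values
print(counts[pack(678)])  # 28 different numbers share 678's packed value of 51
```

Every packed value is just (digit sum) + 30, so any two numbers with the same digit sum collide -- exactly the problem described above.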

In theory you can encode values with an arbitrary number of digits into a single byte, but not without placing heavy restrictions on the number of values that can be encoded—256 unique values max. There is simply no way to unambiguously represent more than this number of possibilities with 8 bits. Otherwise, we'd have compression programs that can compress absolutely any file into a single bit by now. ;)
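For contrast, classic packed BCD (the format DAA exists to support) is unambiguous precisely because each digit keeps its own four bits; the cost is that a byte then holds only two digits (00-99) instead of the full 256 binary values. A minimal sketch, with helper names of my own:

```python
def bcd_pack(tens, units):
    """Classic packed BCD: one decimal digit per nibble."""
    assert 0 <= tens <= 9 and 0 <= units <= 9
    return (tens << 4) | units

def bcd_unpack(byte):
    return byte >> 4, byte & 0x0F

b = bcd_pack(9, 9)
print(hex(b))         # 0x99 -- the byte's hex digits ARE the decimal digits
print(bcd_unpack(b))  # (9, 9): unpacking is unambiguous, unlike the additive scheme
```

BCD wastes the 0xA-0xF codes in each nibble, which is exactly the gap between 2 digits per byte and the theoretical ~2.4.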
Qwerty.55 wrote:
There's no way to deterministically map the set of bytes to another larger set of numbers without more information.

Yes there are - that is how compression works
Kllrnohj wrote:
Qwerty.55 wrote:
There's no way to deterministically map the set of bytes to another larger set of numbers without more information.

Yes there are - that is how compression works
Compression has more information - namely the algorithm that performs the compression in the first place. A positive proof that you're wrong is the Shannon Information Entropy principle. I'd try reading the following article, but without a decent math and EE background, it might go over your head:

http://en.wikipedia.org/wiki/Information_entropy
Kllrnohj wrote:
Qwerty.55 wrote:
There's no way to deterministically map the set of bytes to another larger set of numbers without more information.

Yes there are - that is how compression works

No compression algorithm compresses all strings
Err, every compression algorithm compresses all strings! None of them can achieve a nonzero compression ratio for all strings, though.
Well, if you want to be technical about the use of the word "compress..."
