So I wasn't sure whether to put this in here or the C forum since I work in C, but I figured this might be better off in here.

The CliffsNotes version is: as part of HASHLIB, I've been trying to write an alternative RNG (or a seed for the toolchain's RNG) that is as close to crypto-safe as possible on the hardware. Of course, rtc_Time does not work for this. Another idea I had was to use a composite of 3 different randomness sources: 1 derived from rtc_Time in case the others are not entropic enough, and the other 2 pulled from elsewhere, for example, unmapped memory. After that, I would hash the resulting array of pseudorandom bytes, using either a CRC-32 or a SHA, to generate either a random number or a seed for rand()/random().

The existing code I have does this:
* I intend to optimize RandBytes once I get an RNG that works.
* Also feel free to yell at me if generating a random seed from a secure source and then using rand()/random() is not proper.

After a discussion with jacobly in SAX last night, I can assert that this method of generating randomness should be sufficient on Revision N, but not on models before that: I can get the intended behavior on L-1219N, but on L-0115, it's not random at all. For revision N, I used my CSPRNG to generate enough 16-byte random strings to fill up CEmu's console and printed them, and the output was entirely scrambled. I took that to mean success. On the other one (not sure which rev that was [L-0115]), there were pretty consistent patterns of the same bytes.

So I suppose I created this topic to share what I think works on RevN, as well as to poke for information on what may work on the other revisions.
Disclaimer: I am not a cryptographer or cryptanalyst by trade. If someone needs actual cryptographic guarantees (say, to secure anything of financial value or evade a repressive regime) they should not get them from this post.

For the unmapped address space on the TI-84 Plus CE (which should probably be the same on the 83PCE & CE-T variants), which bits are how random (including some that seem to be entirely non-random—always 0 or always 1) is revision-dependent (& might even vary by batch or individual calculator). My revision C calculator definitely has some addresses that vary, & even has one bit of one address that seems less biased than anything on my revision N calculator. I did find a single bit of a single address that was reasonably random on both, but I seem to have misplaced it (& lost the tables from which I generated that information...which is annoying, because it was from leaving both CEs running overnight to get good statistics on the entire space).

Both of them have a pattern of probabilities that repeats every 512 bytes (as far as I can tell), so it is not literally "one best address" but more like one value of the last 9 address bits & index into the byte, with the rest of the address bits being able to vary with no apparent effect. It may repeat every 32 bytes on rev. N, but again, I lost the statistics & would have to rewrite the program that generated them & then re-gather them. (There are also shorter-range similarities that are not exact repetition, which reveal something about the layout of the SRAM but are unhelpful for our present purposes.)

Also note that the randomness is from certain bits rather than bytes, with the rest of the bits in the byte (at least the ones with the least biased bits) being much more biased. If you are hashing the data with a cryptographic hash function like SHA256, you can hash enough extra reads from the same byte to get enough entropy, but if someone were attempting to make a fast non-cryptographically-secure RNG, they would have to extract & pack the bits into bytes (which may be slower than a good-enough PRNG). You should only use addresses in the D65800-D72BFF sub-range, because D72C00-D7FFFF are connected to actual RAM (a copy of D52C00-D5FFFF) on revisions starting somewhere after C but definitely including I.
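As a rough sketch of what that extract-&-pack step would look like, assuming a hypothetical `next_read` callback standing in for a volatile read of the chosen unmapped address (on-calc it would be something like `*(volatile uint8_t*)0xD658xx`):

```c
#include <stdint.h>
#include <stddef.h>

/* Extract one bit position from each of 8*n reads & pack the bits,
   MSB-first, into n output bytes. `next_read` is a hypothetical
   stand-in for reading the chosen unmapped address. */
void pack_bits(uint8_t (*next_read)(void), unsigned bit,
               uint8_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uint8_t b = 0;
        for (int j = 0; j < 8; j++)
            b = (uint8_t)((b << 1) | ((next_read() >> bit) & 1));
        out[i] = b;
    }
}
```

Eight reads per output byte is why this can end up slower than a decent PRNG.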

Regarding how many reads are needed to get a given level of entropy: I found that XORing together whichever bit of whichever byte it was some smallish number of times (in the 4-8 range, I think) & then packing into bytes resulted in a data stream that passed the DIEHARDER tests with around 1 GiB (give or take an order of magnitude) of samples, indicating that it is statistically random, but that certainly does not mean it is sufficient for cryptographic purposes. It does suggest that there is at least 0.125 to 0.25 (depending on how many XORs I used) bits of entropy per read from that one address, so one could probably read it 8N times & then hash it to get N bits of randomness.
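That XOR-folding step is tiny in code; a minimal sketch, with a hypothetical `read_bit` callback standing in for reading one bit of the unmapped address. (By the piling-up lemma, XORing k independent bits each with bias e leaves a bias of 2^(k-1)·e^k, so a handful of folds shrinks the bias rapidly, though correlation between reads weakens that guarantee.)

```c
#include <stdint.h>

/* XOR-fold k reads of a single bit position into one output bit.
   `read_bit` is a hypothetical stand-in for
   "(*(volatile uint8_t*)addr >> bit) & 1" on the calculator. */
uint8_t xor_fold_bit(uint8_t (*read_bit)(void), unsigned k)
{
    uint8_t acc = 0;
    while (k--)
        acc ^= read_bit() & 1;
    return acc;
}
```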

Technical explanation of the source of randomness:

SRAM has a pair of bitlines for each bit of each column address, which are connected to a circuit that tries to make them both be precharged to an equal, high level before reading from a memory cell (which partially discharges one of the two) & to a sense amplifier (basically a comparator) to read the resulting state afterward. (There may also be another level of gating between the bitlines & the comparators, but that does not strongly affect the dynamics.) In the unmapped space, there are no memory cells, so the controller equalizes the bitlines however well it does, then does nothing (there being no memory cell to discharge them), then senses which bitline is at a higher level. For some column addresses, this will be heavily biased one way or the other because the precharge transistors on the bitlines are too different, while for others, it will be less biased because they are more similar.

Another source of non-randomness distinct from the bias is that the bitlines act like capacitors, so precharging only moves their voltage toward the desired level rather than reaching it. This results in a correlation between reads, with the correlation getting weaker with a longer time interval between them & with more intervening reads performed. This is why we cannot just use a von Neumann extractor (which only de-biases) & instead have to do something like XORing many consecutive bits together (which does not fully remove the bias & correlation but can lower it to an undetectable level).
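For reference, the von Neumann extractor mentioned above looks like this; it removes bias only when successive bits are independent, which the bitline correlation breaks (`next_bit` is a hypothetical bit source):

```c
#include <stdint.h>

/* Von Neumann extractor: read bits in pairs; emit 0 for a 01 pair,
   1 for a 10 pair, & discard 00/11 pairs. De-biases the stream only
   if successive bits are independent, which correlated reads are not. */
uint8_t von_neumann_bit(uint8_t (*next_bit)(void))
{
    for (;;) {
        uint8_t a = next_bit() & 1;
        uint8_t b = next_bit() & 1;
        if (a != b)
            return a;
    }
}
```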
Well, as I've mentioned before, pretty much everything in HASHLIB is proof-of-concept, and will come with a disclaimer not to transmit anything requiring true security with it. That being said, I've been undertaking this project (with modest amounts of assistance from people in the community, and from public domain implementations [if it ain't broke, don't fix it]) to try to master the techniques involved. I know the hardware has its limitations, and something that is truly cryptographically secure may not be doable on the calculator, or perhaps not without pooling so much randomness that it takes a long time to return anything.

In SAX this morning, beck had asked a question about the R register, and we arrived at a seeding function that returns this as the seed:

((Rx<<1) ^ 0b10000111), ((Ry<<1) + second), ((Rx'<<1) ^ 0b10000111), ((Ry'<<1) + (second<<2))

It reads the seconds from rtc_Time as well as the refresh counter. Is this sufficient for a seed to use with rand/random?
Could we perhaps up the entropy by adding a read from volatile memory to the beginning or end of it?
Or running the resulting bytes through a CRC-32 or a SHA?
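For concreteness, the four-byte recipe above could be packed by a pure helper like this (hypothetical: the four R-register samples and the rtc_Time seconds are passed in as parameters, since actually sampling R takes a bit of assembly):

```c
#include <stdint.h>

/* Hypothetical helper packing the four seed bytes described above.
   rx, ry, rxp, ryp stand in for four successive samples of the eZ80
   R refresh register; `second` is the seconds value from rtc_Time. */
uint32_t mix_seed(uint8_t rx, uint8_t ry, uint8_t rxp, uint8_t ryp,
                  uint8_t second)
{
    uint8_t b0 = (uint8_t)((rx  << 1) ^ 0x87);            /* 0b10000111 */
    uint8_t b1 = (uint8_t)((ry  << 1) + second);
    uint8_t b2 = (uint8_t)((rxp << 1) ^ 0x87);
    uint8_t b3 = (uint8_t)((ryp << 1) + (uint8_t)(second << 2));
    return (uint32_t)b0 | ((uint32_t)b1 << 8)
         | ((uint32_t)b2 << 16) | ((uint32_t)b3 << 24);
}
```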

Also, after poking around a bit I learned that rand/random use the Mersenne Twister algorithm, but I would venture that they do not use the CryptoMT variant of it? Is it safe to use a non-crypto PRNG if the seed itself is truly (or darn close to) random?
If what you want is a CSPRNG, you both need an unpredictable seed & need to use a CSPRNG algorithm—non-cryptographic PRNGs will have a pattern in their values that can be exploited to derive the seed, regardless of how good the seed quality itself is.

The R register is not a good choice for cryptography because someone can figure out how much R changes by between their malicious code & the code that reads it & then set it to get whatever value they want into the seed. Also, reading R multiple times in a row is useless (for cryptographic purposes) because the values will always be offset by the same amount. The same goes for the current time (on top of many people not bothering to set their clock at all or having it disabled).

Pretty much the only plausible source of true randomness on the CE is the unmapped address space, & then only on an actual calculator—CEmu uses a very weak PRNG for it (because they want it to be fast, not accurate). & if you wanted portable code, you would have to determine which addresses are suitable on which revisions (or ideally, find one that works on all of them), then fill a buffer (to some multiple of the number of random bits you want, depending on the measured entropy of the address(es) you picked) & hash it. Again, remember that only certain bits at any given address are random, & that they are correlated in time, so you need a larger multiple than you would expect. For a rough estimate, you might want to read 2048 bytes (from the same address, not consecutively) to get 256 bits of entropy.
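A sketch of that gathering step, with a hypothetical `read_byte` callback standing in for the repeated volatile read of the chosen unmapped address; the buffer would then be run through a cryptographic hash (e.g. SHA-256) to compress it down to the target number of bits:

```c
#include <stdint.h>
#include <stddef.h>

/* Gather 8 raw reads per desired output bit (the conservative multiple
   estimated above), to be hashed down afterward. `read_byte` is a
   hypothetical stand-in for "*(volatile uint8_t*)chosen_addr". */
size_t gather_entropy(uint8_t (*read_byte)(void), uint8_t *buf, size_t nbits)
{
    size_t nreads = nbits * 8;
    for (size_t i = 0; i < nreads; i++)
        buf[i] = read_byte();
    return nreads; /* hash buf[0..nreads) to get nbits of output */
}
```

For 256 bits of entropy that comes out to the 2048 reads mentioned above.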

CRC-32 should not be used for cryptographic hashing because it is a linear function of its input. Any cryptographic hash such as SHA256 should be suitable.
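That linearity is easy to demonstrate: for equal-length messages, crc32(a ^ b) == crc32(a) ^ crc32(b) ^ crc32(0), so an attacker can relate inputs and outputs algebraically. A self-contained bitwise CRC-32 (standard reflected polynomial 0xEDB88320) exhibits the identity:

```c
#include <stdint.h>
#include <stddef.h>

/* Standard CRC-32 (reflected, poly 0xEDB88320, init & final XOR
   0xFFFFFFFF), bit-at-a-time for clarity rather than speed. */
uint32_t crc32(const uint8_t *p, size_t n)
{
    uint32_t crc = 0xFFFFFFFFu;
    while (n--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return ~crc;
}
```

The check value crc32("123456789") == 0xCBF43926 confirms the parameters.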

In general, getting enough entropy is not as simple as throwing a bunch of inputs together. If you want good randomness, you need to ensure nobody can predict or control those inputs, & you also need to ensure that you have enough input to not just be able to brute force it. (Obviously you can use fewer bits if you are just doing something for learning purposes & do not need strong security.)

Also, for cryptography, it is not enough that the output looks random. Especially if you put it through a CSPRNG, because that will look random even if you give it completely predictable input.
Fair enough.

if you wanted portable code, you would have to determine which addresses are suitable on which revisions (or ideally, find one that works on all of them)

Is there any existing documentation on this? Or is this something like what you were doing, where you'd have to leave it running to generate statistical data on it and figure out if any common addresses are viable? What program(s) do you use for that? Maybe I can run some of them on my own calculators to help garner this information? Also, what revision is a L-0115?
Ok, I spent many hours sifting through this and looking for the data, and produced this record. Feel free to look it over and make sure I didn't miss anything. These charts present, for every byte in the range 0xD65800 - 0xD659FF, the number of times each bit returned 1 out of 10,000 tests.

The first tab in the sheet presents the byte lists side by side for the revisions. The green highlights any byte with a bit that returned 1 between 4,000 and 6,000 times out of 10,000. The yellow highlights any byte with a bit that returned 1 between 3,000 and 7,000 times out of 10,000, excluding those on the first list.

So here's a question. If I can find shared sequences of bytes that are usable, could I actually read multiple bytes at once? Like through a pointer to a uint32_t or uint16_t? Or would that affect the randomness?

BECK ran the data through an algorithm of his own and produced the following results:


best case 8s:
UMemDmp-RevpreA.bin :   0xd659a9        29296   avg: 3662.0
UMemDmp-Zeroko-C.bin:   0xd65996        31346   avg: 3918.25
UMemDmp-Zeroko-N.bin:   0xd658c1        29439   avg: 3679.875
UMemDump_L.bin      :   0xd65889        26625   avg: 3328.125

best case 16s:
UMemDmp-RevpreA.bin :   0xd659b8        62582   avg: 3911.375
UMemDmp-Zeroko-C.bin:   0xd659be        66684   avg: 4167.75
UMemDmp-Zeroko-N.bin:   0xd65840        62662   avg: 3916.375
UMemDump_L.bin      :   0xd65888        66087   avg: 4130.4375

best case 24s:
UMemDmp-RevpreA.bin :   0xd659a7        99263   avg: 4135.958333333333
UMemDmp-Zeroko-C.bin:   0xd659b6        103283  avg: 4303.458333333333
UMemDmp-Zeroko-N.bin:   0xd6583f        100405  avg: 4183.541666666667
UMemDump_L.bin      :   0xd65809        97850   avg: 4077.0833333333335

best case 32s:
UMemDmp-RevpreA.bin :   0xd65980        135391  avg: 4230.96875
UMemDmp-Zeroko-C.bin:   0xd659bc        139890  avg: 4371.5625
UMemDmp-Zeroko-N.bin:   0xd65840        140888  avg: 4402.75
UMemDump_L.bin      :   0xd65808        137310  avg: 4290.9375

Global best case 8s:    0xd66548        129319  avg: 2020.609375
Global best case 16s:   0xd664c0        276244  avg: 2158.15625
Global best case 24s:   0xd66538        420484  avg: 2190.0208333333335
Global best case 32s:   0xd664c0        572129  avg: 2234.87890625
Ok, I wrote up some code to generate a CSPRNG. Let me preface this by saying I tried looking up some code for how to do this, and the vast majority of what I found used /dev/random or /dev/urandom, assuming the code was running on a computer. So I decided to write one myself based on what I believe to be sufficiently random, so if I got this wrong, be gentle lol.

The init code merely checks your revision number and sets the eread volatile pointer within the csprng state to the proper address.
The Add Entropy code XORs the existing "entropy pool" (a 256-byte region of memory) with the value at eread.
The Random code attempts to pull entropy from multiple sources:
1) It declares but doesn't assign the rand[] array, so it starts filled with whatever garbage is on the stack.
2) It reads from eread 4 times and XORs the result sequentially into rand[].
3) It computes a SHA-256 of the "entropy pool", and then XORs rand with the hash in blocks of 4 bytes.
The resulting uint32_t is returned, but the function calls AddEntropy before returning to scramble the state.

void hashlib_CSPRNGInit(void){
    // this code will be replaced with
    // code to check for revisions and set the eread pointer
    csprng_state.eread = /* some address, chosen per revision */ NULL;
}

bool hashlib_CSPRNGAddEntropy(void){
    volatile uint8_t* eread = csprng_state.eread;
    if(eread==NULL) return false;
    for(uint24_t i=0; i < EPOOL_SIZE; i++)
        csprng_state.epool[i] ^= *eread;    // fresh volatile read each iteration
    return true;
}

uint32_t hashlib_CSPRNGRandom(void){
    uint8_t rand[4];    // deliberately left unassigned so it starts as stack garbage
    uint8_t hash[32];
    SHA256_CTX ctx;
    volatile uint8_t* eread = csprng_state.eread;
    if(eread==NULL) return 0;
    for(uint8_t i = 0; i < 4; i++)
        rand[i] ^= *eread;  // read *eread 4 times, XORing into each byte of rand[]
    hashlib_Sha256Init(&ctx);   // ctx must be initialized before use (assuming an init call exists)
    hashlib_Sha256Update(&ctx, csprng_state.epool, EPOOL_SIZE);
    hashlib_Sha256Final(&ctx, hash);
    for(uint8_t i = 0; i < 32; i += 4)  // XOR each 4-byte block of the hash into rand
        for(uint8_t j = 0; j < 4; j++)
            rand[j] ^= hash[i + j]; // a cast like (uint32_t)rand is not an lvalue, so XOR bytewise
    hashlib_CSPRNGAddEntropy(); // scramble the state before returning, as described above
    return *(uint32_t*)rand;
}
Is this sufficient? Decent? On the right track? lol

Also, a new question: the CSPRNGInit() code will actually poll the 512 bytes a number of times and try to select the byte with the bit(s) closest to being set half the time.

So my question is this: Is it ok to pick a byte where 1 bit is the closest to half, but the rest of the byte is more or less always 0 or equal to the number of polls? Or is it actually better to pick a byte where more than 1 bit may be a bit farther from half than in the first scenario, but more of the byte produces fluctuation? Wouldn't this produce more entropy in the entire stream?
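One way to frame that choice in code is to score each byte by its single least-biased bit (a hypothetical sketch over count tables like those in the spreadsheet). Note that if the output gets hashed anyway, what matters is the total entropy of the whole read, so a byte with several moderately random bits can beat a byte with one near-ideal bit and seven stuck ones.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>

/* Distance of a bit's ones-count from the ideal half-set count. */
static unsigned bit_dist(uint16_t count, uint16_t polls)
{
    return (unsigned)abs((int)count - (int)(polls / 2));
}

/* Index of the byte containing the single least-biased bit, given
   counts[addr][bit] = number of times that bit read 1 out of `polls`. */
size_t best_single_bit(const uint16_t counts[][8], size_t n, uint16_t polls)
{
    size_t best = 0;
    unsigned best_d = polls;
    for (size_t a = 0; a < n; a++)
        for (int b = 0; b < 8; b++) {
            unsigned d = bit_dist(counts[a][b], polls);
            if (d < best_d) { best_d = d; best = a; }
        }
    return best;
}
```

The multi-bit scoring variant would sum a per-bit closeness score over all 8 bits instead of taking the minimum.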