Hi all,

As a side project, I am porting the rand* commands of the TI-84 Plus (rand, randInt, randBin, randIntNoRep, randM and randNorm) to Python. This will let you generate the same random numbers on your PC as the calculator gives.

For example, to make groups, my teacher asks us for a seed for rand, gives us all a number, and then divides us into groups with randInt. I would like to find a seed that puts me in a group with a friend of mine. In order to do that, I want to brute-force such a seed on my PC.

So I started with porting the rand command to Python (thanks, Richard). I then ported randInt and randBin (just for completeness). I am now stuck at randIntNoRep. Can anyone post an implementation of it in TI-BASIC or another programming language? It will need to output the same numbers as the original randIntNoRep command (assuming you are using the same seed).

In case you are curious, here is the code I already have.

I added support for randM. An implementation of randIntNoRep is still wanted.
I haven't looked into the OS to see how it implements randIntNoRep, but have you tried the equivalent of the following?

Code:
vals = list(range(start, end))
dummy = [yourlib.rand() for _ in vals]
dummy, vals = (list(t) for t in zip(*sorted(zip(dummy, vals))))
Is this the same as the implementation on http://tibasicdev.wikidot.com/randintnorep ? I tried that; it ran, but it gave different values.

Code:
rand(52→L₂
seq(X,X,0,51→L₁
SortA(L₂,L₁
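For reference, that TI-BASIC approach (pair each candidate with a random key, then SortA by the keys) can be sketched in Python. Note that random.random here is only a stand-in for the calculator's rand, so the permutation won't match a calculator's output:

```python
import random

def rand_int_no_rep(lower, upper, rand=random.random):
    # Mirrors rand(n)->L2 : seq(X,X,lower,upper)->L1 : SortA(L2,L1)
    # Pair every candidate value with a fresh random key, then sort by key.
    vals = list(range(lower, upper + 1))
    keys = [rand() for _ in vals]
    return [v for _, v in sorted(zip(keys, vals))]

deck = rand_int_no_rep(0, 51)
print(deck)  # a permutation of 0..51
```

Plugging in a port of the calculator's rand as the `rand` argument would be the test of whether this really is the algorithm EOS uses.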
Can you not just use random? For example:

Code:

import random

random.seed(x)   # you could find a certain seed that would put you and your friend together? Just manipulate it in some way

random.randint(a,b)   # a random integer between a and b

random.uniform(a,b)   # basically the same as rand - gives you a float between a and b

obviously there's more in the library.

Why do you need your own library if there's random?
Michael2_3B wrote:
Why do you need your own library if there's random?
The PRNG underlying Python's random numbers is not the same as the one that TI-BASIC uses. He wants to be able to get the same random numbers from the same seed that he would get on a calculator.
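A quick illustration of the mismatch: Python's random module is a Mersenne Twister, so seeding it with the same number as 0→rand on a calculator produces an unrelated value.

```python
import random

# Python's random module is a Mersenne Twister; seeding it with 0 gives
# a value unrelated to what a TI-84 Plus produces after 0->rand.
random.seed(0)
x = random.random()
print(x)
```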

redfast00 wrote:
Is this the same as the implementation on http://tibasicdev.wikidot.com/randintnorep ? I tried that; it ran, but it gave different values.

Code:
rand(52→L₂
seq(X,X,0,51→L₁
SortA(L₂,L₁
Yes, it's essentially the same as that, but in Python.
Do we know where the OS PRNG is located?
The algorithm is documented in many places, here is an example.
@CVSoft
I already have the algorithm for rand, and derived randInt, randBin and randM from it. The problem is that I don't know how the algorithm for shuffling (randIntNoRep) works.
@elfprince13
What (free & opensource) tools would one use to reverse engineer this?
An emulator (e.g. jsTIfied, WabbitEmu) and/or https://github.com/drdnar/eZDisasm are important tools for reverse engineering.
Resurrecting this topic: although the known algorithm generates correct results in general, I also want to understand the rounding behavior of EOS, because the guard digits (which the calculator normally never displays) differ from what I get with implementations that have greater precision.

To begin, I threw together this TI-BASIC program to display all 14 digits of a number (including the 4 guard digits) from Ans, and verified the results by checking the contents of Ans in memory using a debugger:
Code:
// Input: Ans: real number
// Output: displays exponent and 14 digit mantissa
Ans->A
// ~int(~x) == ceiling(x)
~int(~log(A->N
Disp "EXPONENT",N
// Normalize so the number has no integer part, only fractional
A10^(~N->A
// Concatenating an empty string with something else is ERR: INVALID DIM!
"  ->Str1
// Iteratively pull digits off the front
For(D,0,13
   10A->A
   int(A->B
   A-B->A
    Str1+sub("0123456789",B+1,1->Str1
End
Disp "MANTISSA",Str1
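For cross-checking on the PC side, the same digit extraction can be sketched with Python's Decimal (a hypothetical helper I'm using for comparison, not part of the port; it uses Decimal's adjusted exponent rather than the log-based trick above):

```python
from decimal import Decimal

def mantissa_digits(x, ndigits=14):
    """Return (exponent, first `ndigits` mantissa digits) of a Decimal value.

    The exponent convention matches the TI-BASIC program above:
    the mantissa is normalized into [0.1, 1).
    """
    x = Decimal(x)
    exp = x.adjusted() + 1  # equivalent to ~int(~log(A for non-powers of ten
    digits = ''.join(map(str, x.as_tuple().digits))[:ndigits]
    return exp, digits.ljust(ndigits, '0')

print(mantissa_digits('0.9435974024921307499605'))  # (0, '94359740249213')
```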


When I start with the default seed on a calculator (0->rand), the digits of the first result are 94359740249213. Now compare with the results of this Python implementation, still using decimal floats and arbitrary precision:

Code:
from decimal import Decimal

mod1 = Decimal(2147483563)
mod2 = Decimal(2147483399)
mult1 = Decimal(40014)
mult2 = Decimal(40692)
seed1 = Decimal(12345)
seed2 = Decimal(67890)


def seed(n):
    global seed1, seed2
    n = Decimal(n)
    if n < 0:
        n = -n
    if n == 0:
        seed1 = Decimal(12345)
        seed2 = Decimal(67890)
    else:
        seed1 = (mult1 * n) % mod1
        seed2 = n % mod2


def rand():
    global seed1, seed2
    seed1 = (seed1 * mult1) % mod1
    seed2 = (seed2 * mult2) % mod2
    result = (seed1 - seed2) / mod1
    if result < 0:
        result = result + 1
    return result


This yields Decimal('0.9435974025194436377625489653'), which differs from the calculator beginning in the 10th digit (which rounds to the same value as displayed by a calculator).

Where things get a bit weird is that, inspecting the algorithm, all of the operations are integer arithmetic except the final division to compute result, so there doesn't seem to be much room for EOS to introduce rounding error. Possibly the integer multiplications could get large enough to be rounded off, but offhand I doubt that's actually possible.
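One way to sanity-check that suspicion: the largest intermediate product the naive update can produce is (mod1 − 1) × mult1, which is a 14-digit number, i.e. exactly at the edge of TI's 14-digit mantissa. A quick bound check with the constants from above:

```python
# Worst-case magnitudes of seed * multiplier before the modulus is applied
mod1, mult1 = 2147483563, 40014
mod2, mult2 = 2147483399, 40692

worst1 = (mod1 - 1) * mult1
worst2 = (mod2 - 1) * mult2
print(worst1, len(str(worst1)))  # 85929407249868 -> 14 digits
print(worst2, len(str(worst2)))  # also 14 digits
```

So the products just barely fit in a 14-digit mantissa without rounding, which supports the idea that the error must come from somewhere else.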

So does anybody else have ideas about where the error creeps in? Bonus points if you can express it in Python so the Python version yields the exact same result as EOS returns; I hope it might just be an x.round(10) or something in the right place!
I don't really want to try to completely reverse-engineer the romcall for generating random numbers, so I stepped through evaluating rand in an emulator with watchpoints set on the OP pseudo-registers, watching their values. At various times I saw the expected default seed values (12345 and 67890) and the expected intermediate values seed1*mult1 = 493972830 and (seed2*mult2) % mod2 = 615096481.

At one point I noticed a value that differed ever so slightly from the documented values: I saw 2147483562, which is one less than the documented value for mod1. Just modifying the code to use that value for mod1 yields a result closer to what EOS returns, but still not perfectly accurate: I get 0.9435974024931791305604415146, which now differs in the 12th digit. Possibly TI's programmers slightly modified the constants in L'Ecuyer's algorithm as a kind of trapdoor to detect implementations that copy theirs? (Although, based on section 4 of the paper, modifying the constants could negatively impact the quality of the random numbers.)
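For anyone who wants to reproduce that experiment: this is the Decimal-based rand() from earlier in the thread with only mod1 changed to 2147483562 (a sketch of the modified run, not EOS's actual code).

```python
from decimal import Decimal

mod1 = Decimal(2147483562)  # one less than L'Ecuyer's documented 2147483563
mod2 = Decimal(2147483399)
mult1 = Decimal(40014)
mult2 = Decimal(40692)
seed1, seed2 = Decimal(12345), Decimal(67890)  # default seed state (0->rand)

def rand():
    global seed1, seed2
    seed1 = (seed1 * mult1) % mod1
    seed2 = (seed2 * mult2) % mod2
    result = (seed1 - seed2) / mod1
    if result < 0:
        result += 1
    return result

r = rand()
print(r)  # 0.9435974024931791305604415146
```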

I also saw the value 0.230025340985317 fly past at one point, which got the last digit rounded off (..317 became ..320). That doesn't seem to correlate very well with any values that actually appear in the algorithm, but observing that round-off might provide evidence for other rounding that I'm not currently accounting for.
I was reviewing the paper in more detail because I wasn't making much progress continuing to use a debugger, and realized that the algorithm detailed in Figure 3 includes many of the mystery constants that I saw moving through the OP registers (including 53668, 12211, 52774, 2147483562 and something with mantissa 4656613059555)!

That algorithm transcribed (it's apparently written in Pascal):
Code:
FUNCTION Uniform : REAL;
  VAR
     Z, k : INTEGER;
  BEGIN
  k  := s1 DIV 53668;
  s1 := 40014 * (s1 - k * 53668) - k * 12211;
  IF s1 < 0 THEN s1 := s1 + 2147483563;

  k   := s2 DIV 52774;
  s2 := 40692 * (s2 - k * 52774) - k * 3791;
  IF s2 < 0 THEN s2 := s2 + 2147483399;

  Z := s1 - s2;
  IF Z < 1 THEN Z := Z + 2147483562;

  Uniform := Z * 4.656613E-10
END


A quick reimplementation of this in Python didn't yield results at all like what I expect, but I'm confident this does accurately reflect what EOS does and the weird result I got is just an issue with my implementation.

Updated: I learned that DIV in Pascal is integer division (after I noticed that it didn't make sense to be taking seed1 / 53668 then multiplying that by 53668 again), and this Python version now yields a precise result when compared with EOS:

Code:
from decimal import Decimal

# Default seed state, equivalent to 0->rand
seed1, seed2 = 12345, 67890

def rand():
    global seed1, seed2
    k = seed1 // 53668
    seed1 = 40014 * (seed1 - k * 53668) - k * 12211
    if seed1 < 0:
        seed1 += 2147483563

    k = seed2 // 52774
    seed2 = 40692 * (seed2 - k * 52774) - k * 3791
    if seed2 < 0:
        seed2 += 2147483399

    z = seed1 - seed2
    if z < 1:
        z += 2147483562
    return z * Decimal('4.656613059555e-10')

The first value returned by this implementation is 0.9435974024921307499605, which rounds to the value 0.94359740249213 that EOS generates!

(This post's Python code was corrected after Kerm pointed out an error downthread where I incorrectly used 3791 as a coefficient while updating the value of seed1, instead of 12211.)
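For easy testing, here is the same generator again as a fully self-contained snippet, with the default seed state made explicit:

```python
from decimal import Decimal

# Default seed state, equivalent to 0->rand on the calculator
seed1, seed2 = 12345, 67890

def ti_rand():
    # One step of the combined L'Ecuyer generator as EOS evaluates it
    global seed1, seed2
    k = seed1 // 53668
    seed1 = 40014 * (seed1 - k * 53668) - k * 12211
    if seed1 < 0:
        seed1 += 2147483563

    k = seed2 // 52774
    seed2 = 40692 * (seed2 - k * 52774) - k * 3791
    if seed2 < 0:
        seed2 += 2147483399

    z = seed1 - seed2
    if z < 1:
        z += 2147483562
    return z * Decimal('4.656613059555e-10')

r = ti_rand()
print(r)  # 0.9435974024921307499605
```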
A few bonus comments:

TI's constant for multiplying z is close to 2147483562⁻¹ (which would be the right choice to uniformly distribute output values given the range of z), but its mantissa is off from the actual value of 2147483562⁻¹ by about 5e-12. Given that the calculator has enough precision to match it to within 5e-15, it's not clear to me why it differs.

The choice of L'Ecuyer's algorithm is rather silly, given it's actually implemented with floating-point arithmetic but the algorithm is designed to be efficiently implemented with mostly 32-bit integer operations. Possibly they used this algorithm to match older calculators that did implement it with integer arithmetic, though I also feel a good calculator RNG could simply generate 14 digits in the range 0-9 to yield a value in the half-open range [0,1) and that could be much simpler.

It seems likely that this algorithm was used since at least the TI-81 (introduced in 1990) so the implementation in EOS probably does predate algorithms that are now more common and have much longer periods such as the Mersenne Twister (though MT needs quite a lot of state and the TinyMT variant that only needs 16 bytes of state wasn't proposed until 2011).
The puzzles continued once I tried testing my implementation with a large value input to the function to set a seed. The paper is no help here because they don't suggest any particular way to choose a seed, so I had to reverse-engineer it completely.

My implementation of TI floating point enforces numeric limits in the same way as TI's, so the straightforward version of the seed function above triggers an overflow error when the input value is sufficiently large (in an easy example, 10⁹⁹ multiplied by 40014 is 4.0014×10¹⁰⁴, which has an exponent that is too large for TI's float format). In addition, taking the modulus of a large number like 10⁹⁹ (with an exponent larger than 13, so the gap between adjacent representable values is larger than 1) with 2147483563 is impossible to do accurately. The underlying library I'm using correctly returns NaN in that case to indicate that the result cannot be represented, but clearly EOS does something different that I needed to reverse-engineer.

As it turns out, the algorithm goes like this:
  1. Let S be the user-provided seed value.
  2. Set the sign of S to positive and clamp its exponent to a maximum of +8. Drop any fractional part, yielding a value with no more than 9 digits in its integer part. For example, 10⁹⁹ becomes 100000000 and 10¹²−1 becomes 999999999.
  3. Let A = ⌊S / 53668⌋
  4. Let B = A × 53668
  5. Let C = S - B. This means C is equivalent to S mod 53668.
  6. Let D = C × 40014.
  7. Let E = A × 12211
  8. Set seed1 to D - E
  9. Set seed2 to S

I haven't yet translated this to code and run thorough tests on it, but I'm pretty confident in its correctness. The trick is mainly in realizing that it limits the exponent of the provided value, so values will never exceed mod1 or mod2 and those values become irrelevant.
It turned out that my Python implementation doesn't update the seeds in quite the correct way when generating a new number, because after the first number its results diverge from EOS. My Rust implementation does match however, so I think the issue is just a little bit of sloppiness with data types in Python.

I also found one missing piece in my previous post's description of the algorithm: when setting seed1, the value of D - E is an integer modulo 2147483563, so if the result of that subtraction is negative then 2147483563 should be added to it to get the value of seed1.

My Rust implementation, which isn't completely self-contained: d128 is a 128-bit IEEE 754 decimal float, and Float is a wrapper around a pair of d128 that implements TI-float semantics (equivalent precision, overflow behavior and so forth):

Code:
/// Generates random real numbers.
pub struct RandomNumberGenerator {
    seed1: i32,
    seed2: i32,
}

impl Default for RandomNumberGenerator {
    fn default() -> Self {
        RandomNumberGenerator {
            seed1: 12345,
            seed2: 67890,
        }
    }
}

impl RandomNumberGenerator {
    const MOD1: i32 = 2147483563;
    const MOD2: i32 = 2147483399;

    /// Seed the random number generator.
    ///
    /// Returns a `DataType` error if the provided value is not a real number, but otherwise never
    /// fails.
    pub fn set_seed(&mut self, mut seed: Float) -> Result<(), EvalError> {
        if !seed.is_real() {
            return Err(EvalError::DataType);
        }
        if seed.is_zero() {
            *self = Default::default();
            return Ok(());
        }

        // Clamp exponent to maximum of +8
        let exp = seed.real_exponent() as i32;
        if exp > 8 {
            use decimal::d128;

            seed = Float::real(
                seed.to_real_as::<d128>()
                    .expect("Seed is checked for realness")
                    .scaleb(d128::from(8 - exp)),
            )
            .expect("Result from scaleb is always still a valid Float");
        }
        // Truncate fractional part (if any) and ensure value is positive
        let seed = seed
            .abs()
            .truncate()
            .to_real_as::<i32>()
            .expect("Scaled seed should always be a valid i32");

        let x = ((seed % 53668) * 40014) - ((seed / 53668) * 12211);
        self.seed1 = if x >= 0 { x } else { x + Self::MOD1 };
        self.seed2 = seed;

        Ok(())
    }

    /// Generate a random number in the half-open range [0,1).
    pub fn next_random(&mut self) -> Float {
        let k = self.seed1 / 53668;
        self.seed1 = 40014 * (self.seed1 - k * 53668) - k * 12211;
        if self.seed1 < 0 {
            self.seed1 += Self::MOD1;
        }

        let k = self.seed2 / 52774;
        self.seed2 = 40692 * (self.seed2 - k * 52774) - k * 3791;
        if self.seed2 < 0 {
            self.seed2 += Self::MOD2;
        }

        let mut z = self.seed1 - self.seed2;
        if z < 1 {
            z += 2147483562;
        }

        (Float::from(d128!(4.656613059555e-10)) * z).expect("RNG evaluation should never fail")
    }
}
For fun, I tried changing your Python implementation to store the seeds in a numpy int32 array, and (seeding with 0) got the same incorrect second value as before that change:

Code:
0.9435974024921307499605
0.9444065318520685791165


I looked closely, and it seems you have a copy-and-paste or transposition error: your Python implementation has seed1 = 40014 * (seed1 - k * 53668) - k * 3791, but your Rust implementation has self.seed1 = 40014 * (self.seed1 - k * 53668) - k * 12211;. Correcting that error produced the correct series of values:


Code:
0.9435974024921307499605
0.9083188609747471458765



Code:
#!/usr/bin/env python3

from decimal import Decimal
import numpy as np

MOD1 = 2147483563
MOD2 = 2147483399

class Rand():
   def __init__(self, seed = 0):
      self.seeds = np.array([12345, 67890], dtype='int32')
      if 0 != seed:
         raise NotImplementedError("Can't set a non-zero seed yet")

   def rand(self):
      k = self.seeds[0] // 53668
      self.seeds[0] = 40014 * (self.seeds[0] - k * 53668) - k * 12211
      if self.seeds[0] < 0:
         self.seeds[0] += MOD1

      k = self.seeds[1] // 52774
      self.seeds[1] = 40692 * (self.seeds[1] - k * 52774) - k * 3791
      if self.seeds[1] < 0:
         self.seeds[1] += MOD2

      z = self.seeds[0] - self.seeds[1]
      if z < 1:
         z += 2147483562
      return int(z) * Decimal('4.656613059555e-10')

rand = Rand()
print(rand.rand())
print(rand.rand())
Ah, good catch! That does look like it was a copy-paste error where I copied the expression for seed2 and forgot to update that last constant.

Since I think I've now nailed down the implementation questions to achieve precise results in all situations, I've dropped a comment on the StackOverflow discussion with my findings, since I'm unaware of anybody else having documented the precise constant that TI use here and how it differs from the one suggested by L'Ecuyer.

A puzzle we'll probably never solve is how it was learned that this algorithm is the correct one. The earliest reference I've seen is thornahawk and DarkerLine posting on United-TI in 2007. The claim on TI|BD relating to L'Ecuyer's algorithm was added by DarkerLine according to the page history, but sadly there doesn't seem to be any information on how they determined that.
Tari wrote:
A puzzle we'll probably never solve is how it was learned that this algorithm is the correct one. The earliest reference I've seen is thornahawk and DarkerLine posting on United-TI in 2007. The claim on TI|BD relating to L'Ecuyer's algorithm was added by DarkerLine according to the page history, but sadly there doesn't seem to be any information on how they determined that.


I believe this is a nugget of information TI-Cares is willing to drop. I think they even give the full citation of L'Ecuyer's algorithm. Whether they were willing to provide this information back in 2007... I don't know. Regardless, the constants are pretty recognizable and the output is clearly not generated by some basic methods (LFSR, LCG, etc). The constants are quite prominent there in the code and in the paper; it is also conceivable that someone quickly checked papers regarding what is probably no more than a handful of possible algorithms available to TI in the 80s and 90s.
  