I want to create "randomness", or at least a large amount of apparent entropy, from certain data sets. The outcome must be predictable/constant, though, based on various factors: which datasets are used, and so on. So I'm not looking to just read from /dev/random or /dev/urandom; I need this to be completely reproducible. I just want the spread to be varied enough across datasets while using fairly limited key sizes. A rough idea:
[Key] --> [DatasetA] --> [Series of reproducible numbers]. Numbers that will change significantly based on a small modification to the key or any additional variable. Please note I'm trying to avoid just hashing, as my requirement demands a lookup into the datasets, driven by the key, that will still simulate randomness without being random. It's procedurally generated, but reproducible. I would prefer it be implementable in C, as I want to use it for a (non-school) project. It's a hobby, but essentially I want to end up knowing that certain criteria will produce very, very varied results. Everything must be self-contained (no external dependencies; just the code, the datasets, and the keys).
If I'm looking at this from the wrong perspective, I'm open to other suggestions, but obviously it'd be preferable to not have to write the rest of the code-base from scratch.
I've already tried several "made-up" "algorithms" based on the key's size and contents and the dataset contents, but I'm not getting sufficient entropy.
[b]Update:[/b] Thanks to Severrin's answer, I was able to get a baseline for my key. After some tweaking (taking the found piece of data and then using it to re-seed the whole process), it's working. My "rooms" are very nicely varied and I can still reproduce them. I suppose it just took some out-of-the-box thinking on my part as well.
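In case it helps anyone, here is a rough sketch of the idea (not my actual code; the function names, the LCG step, and the XOR mixing are just stand-ins):

[code]
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Step the generator: a plain 64-bit LCG like the one in the answer below. */
static uint64_t step(uint64_t x) {
    return 6364136223846793005ULL * x + 1442695040888963407ULL;
}

/* Derive the room stream: use the key to pick a piece of the dataset,
   then fold that piece back into the seed and run the generator from there. */
static uint64_t room_seed(uint64_t key, const uint64_t *dataset, size_t n) {
    uint64_t s = step(key);            /* first pass, driven only by the key */
    uint64_t found = dataset[s % n];   /* the "found piece of data"          */
    return step(key ^ found);          /* re-seed the whole process with it  */
}

int main(void) {
    uint64_t dataset[] = { 11, 42, 1977, 123456789, 5 };
    uint64_t s = room_seed(0xDEADBEEFULL, dataset, 5);
    for (int i = 0; i < 4; ++i) {      /* a few reproducible "room" values */
        s = step(s);
        printf("%llu\n", (unsigned long long)s);
    }
    return 0;
}
[/code]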
Thanks everyone,
- Nate
OK, here is a simple Linear Congruential Generator for a 64-bit -> 64-bit full-period mapping, together with its inverse function.
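A minimal sketch: the constants below are Knuth's MMIX parameters, chosen only as an example; any multiplier a with a ≡ 1 (mod 4) and any odd increment c give the full 2^64 period.

[code]
#include <stdint.h>
#include <stdio.h>

/* Example full-period parameters (Knuth's MMIX LCG). */
#define LCG_A 6364136223846793005ULL
#define LCG_C 1442695040888963407ULL

/* One step forward: x -> a*x + c (mod 2^64, via unsigned wraparound). */
static uint64_t lcg_next(uint64_t x) {
    return LCG_A * x + LCG_C;
}

/* Modular inverse of an odd 64-bit number via Newton iteration. */
static uint64_t inv64(uint64_t a) {
    uint64_t x = a;                 /* correct to the 3 low bits       */
    for (int i = 0; i < 5; ++i)     /* doubles the correct bits: 3->96 */
        x *= 2 - a * x;
    return x;
}

/* One step backward: x -> a^{-1} * (x - c) (mod 2^64). */
static uint64_t lcg_prev(uint64_t x) {
    return inv64(LCG_A) * (x - LCG_C);
}

int main(void) {
    uint64_t seed = 0x123456789ABCDEF0ULL;   /* derive this from your key */
    uint64_t y = lcg_next(seed);
    printf("forward: %llu\n", (unsigned long long)y);
    printf("inverse recovers seed: %s\n", lcg_prev(y) == seed ? "yes" : "no");
    return 0;
}
[/code]

Because the mapping is a bijection on 64-bit values, every seed (e.g. one built from your key) walks the same full-period sequence from a different starting point, which is exactly what makes the output reproducible.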
And here is a 128-bit LCG:
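A sketch along the same lines, assuming a compiler with __uint128_t (GCC, Clang). The constants are borrowed from the PCG family's 128-bit generator, but again any multiplier ≡ 1 (mod 4) with an odd increment gives the full 2^128 period.

[code]
#include <stdint.h>
#include <stdio.h>

typedef __uint128_t u128;   /* GCC/Clang extension */

/* Example full-period parameters (PCG 128-bit defaults). */
#define A128 ((((u128)0x2360ED051FC65DA4ULL) << 64) | 0x4385DF649FCCF645ULL)
#define C128 ((((u128)0x5851F42D4C957F2DULL) << 64) | 0x14057B7EF767814FULL)

/* One step forward: mod 2^128 via unsigned wraparound. */
static u128 lcg128_next(u128 x) {
    return A128 * x + C128;
}

/* Modular inverse of an odd 128-bit number, Newton iteration. */
static u128 inv128(u128 a) {
    u128 x = a;                     /* correct to the 3 low bits        */
    for (int i = 0; i < 6; ++i)     /* 3 -> 6 -> ... -> 192 >= 128 bits */
        x *= 2 - a * x;
    return x;
}

/* One step backward. */
static u128 lcg128_prev(u128 x) {
    return inv128(A128) * (x - C128);
}

int main(void) {
    u128 seed = (((u128)0x0123456789ABCDEFULL) << 64) | 0xFEDCBA9876543210ULL;
    u128 y = lcg128_next(seed);
    /* Print only the low 64 bits for illustration; the state is 128 bits. */
    printf("low 64 bits: %llu\n", (unsigned long long)(uint64_t)y);
    printf("inverse recovers seed: %s\n", lcg128_prev(y) == seed ? "yes" : "no");
    return 0;
}
[/code]

The 128-bit state gives you much more room to fold in both the key and per-dataset material while keeping the mapping invertible and fully reproducible.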