The randomness of the random number generator isn't the issue. Rather, the issue is that there are many subtle variations on the basic shuffling algorithm, and the consequences of these variations are not clear from a cursory examination of the algorithms' outputs. The canonical shuffling algorithm is called the Fisher-Yates shuffle, or the Knuth shuffle, and it produces uniformly-distributed permutations. By changing ONE character in the code for the Knuth shuffle, you get Sattolo's algorithm, which also seems to produce random permutations but in fact produces uniformly-distributed *cycles* over the array elements. There are only (n-1)! possible cycles, whereas there are n! permutations, so Sattolo's algorithm can never produce most of them.
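To make the "one character" concrete, here's a sketch of both algorithms in Python (the function names are mine; the single changed character is the upper bound passed to `randint`):

```python
import random

def knuth_shuffle(items):
    """Fisher-Yates / Knuth shuffle: every one of the n!
    permutations is produced with equal probability."""
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)      # 0 <= j <= i: element may stay put
        items[i], items[j] = items[j], items[i]

def sattolo_shuffle(items):
    """Sattolo's algorithm: identical except j is strictly less
    than i, so the result is always a single n-cycle -- only
    (n-1)! of the n! permutations can ever appear."""
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i - 1)  # 0 <= j < i: the one-character change
        items[i], items[j] = items[j], items[i]
```

A handy consequence for testing: because Sattolo's output is a single cycle, no element ever ends up in its original position, yet a quick glance at a few shuffled arrays won't reveal that anything is wrong.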

Another one-character change to the Knuth shuffle produces an even subtler bug -- the new algorithm can produce all permutations, but it doesn't produce them with equal probability. The reason is that it picks one of n indices at each of its n steps, giving n^n equally likely execution paths, whereas the number of permutations is n!. Since n^n is never divisible by n! for n > 2, those paths cannot be spread evenly over the permutations, so some permutations must be produced with greater probability than others. And if your test data is large enough (e.g. 500 elements), the outputs of these three algorithms will be almost indistinguishable by inspection.
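You can see the bias directly by tallying the outputs of the biased shuffle on a tiny array. This is a sketch (the function name and trial count are mine): with 3 elements there are 3^3 = 27 equally likely execution paths spread over 3! = 6 permutations, and 27 isn't divisible by 6, so the counts cannot come out equal.

```python
import random
from collections import Counter

def naive_shuffle(items):
    """Biased shuffle: swaps every position with a random position
    anywhere in the array (j ranges over 0..n-1 instead of 0..i),
    so the n^n execution paths can't map evenly onto n! outputs."""
    n = len(items)
    for i in range(n):
        j = random.randint(0, n - 1)  # bug: should be randint(0, i)
        items[i], items[j] = items[j], items[i]

# Tally which permutation of [0, 1, 2] each run produces.
counts = Counter()
for _ in range(60000):
    data = [0, 1, 2]
    naive_shuffle(data)
    counts[tuple(data)] += 1
```

Running this, all 6 permutations do appear, but the frequencies are visibly lopsided (some permutations are produced by 5 of the 27 paths, others by only 4), which is exactly the kind of defect a desk check of a few outputs will never catch.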

It worries me that an SDD teacher would happily put a question like this on an exam paper without considering these issues :'(

And to answer your question: an HSC marker will perform an informal desk check on your algorithm. If it's too complex or appears wrong at first glance, they might mark it down. Try to make your algorithms as simple and intuitive as possible, and comment your code heavily if you're concerned that it's difficult to understand.