To give you the control to make appropriate decisions for your scenario. There are many hash functions you could write that would take hardly any time at all to execute, but which a compiler might not be smart enough to come up with on its own.
So make a hash table that maps whatever you want to functions - bam, done.
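For anyone who hasn't seen the idiom, here's a minimal sketch of that dispatch table in Python (the command names and handlers are made up for illustration): a dict maps keys to functions, and lookup is average O(1), much like a compiled jump table.

```python
# Illustrative handlers - stand-ins for whatever your cases would do.
def handle_start():
    return "starting"

def handle_stop():
    return "stopping"

# The "switch": keys map directly to functions.
DISPATCH = {
    "start": handle_start,
    "stop": handle_stop,
}

def dispatch(command, default=lambda: "unknown"):
    # dict.get gives us a default branch for free.
    return DISPATCH.get(command, default)()

print(dispatch("start"))   # → starting
print(dispatch("bogus"))   # → unknown
```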
No. A switch is, abstractly, a multiway branch; in some cases the compiler doesn't know how to implement it as such concretely, and must then fall back on sequential branching.
Abstractly, yes, but in practice it often compiles down to a series of if/else ifs, because CPUs don't have a "switch"-style instruction.
Because nobody ever writes switches with a large number of cases?
People write crappy code all the time; that doesn't mean you should make it easier for them to write crappy code.
O RLY. I'm pretty sure big-O notation doesn't assume an "ideal machine", just "for n large enough". Cases where O() really would be misleading for reasonable n (1.0000000001^n vs. n^100000) pretty much don't show up in practice.
No, it assumes there's no such thing as caches or branch prediction, and that all operations magically take the exact same amount of time.
False. There are a good number of cases where the compiler can fall back on an O(log n) tree of comparisons (a tree "jump table") before it has to resort to sequential tests.
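The lowering being described can be sketched in a few lines (the case values and handlers below are made up for illustration): sort the case labels once, then binary-search them, which costs O(log n) comparisons instead of a linear if/elif scan.

```python
import bisect

# Hypothetical sparse case labels, kept sorted, with a parallel
# list of handlers - one per label.
CASE_VALUES = [3, 17, 99, 1024, 70000]
HANDLERS = [lambda: "a", lambda: "b", lambda: "c", lambda: "d", lambda: "e"]

def switch(x, default=lambda: "default"):
    # Binary search over the sorted labels: O(log n) comparisons,
    # which is roughly what a compiler's decision-tree lowering does.
    i = bisect.bisect_left(CASE_VALUES, x)
    if i < len(CASE_VALUES) and CASE_VALUES[i] == x:
        return HANDLERS[i]()
    return default()

print(switch(17))   # → b
print(switch(5))    # → default
```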
I can't find a shred of evidence for this hypothetical tree jump table; care to actually back that claim up with anything at all?
I'm not sure how this is relevant. Python ints are still well-ordered.
Because you can subclass int and override __eq__, and those instances still pass anywhere an int is accepted. You can't do jump tables if you don't know how the compare works.
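A quick sketch of why this matters (the class name is made up for illustration): an int subclass can redefine equality, so any "switch" compiled down to raw-value comparisons would disagree with the language's own semantics.

```python
class WeirdInt(int):
    """An int subclass whose equality never matches anything."""
    def __eq__(self, other):
        return False
    # Defining __eq__ suppresses inherited hashing, so restore it.
    __hash__ = int.__hash__

x = WeirdInt(5)
# A jump table keyed on the raw value 5 would take the "== 5" branch,
# but the language's comparison says otherwise:
print(x == 5)    # → False
print(int(x))    # → 5
```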
We can keep arguing if you want, but it's not terribly likely you'll convince me to side with Guido van Rossum over Donald Knuth and Peter Naur.
Donald Knuth and Peter Naur? Where have they published their feelings on switches, and Python's lack of them?