Quote:
AngleWyrm said:
Quote:
Will said:
And these days, it's up to the compiler to do profiling optimizations, not the programmer. The only optimizations really left to programmers are macro-level, i.e. choosing an O(n log n) algorithm over an O(n^2) algorithm.
|
Compilers can do a lot, but the coder should look at a profile as well. In one project (the Hat random container), profiling showed the code was spending a lot of time in one particular function (update_weights). With that information, I was able to get a good speed boost by moving one locally constructed variable into a class member.
|
Yes, but that's still what I would call "macro-level". You're getting the speed boost because, instead of recalculating a value that doesn't change much on every call, you store it in a higher scope and calculate it fewer times (maybe only once). That's an algorithmic choice, much like choosing mergesort over bubblesort, only with less dramatic results.
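Roughly the shape of that change, as I understand it (the names besides update_weights are placeholders, since I'm guessing at the Hat container's internals):

    #include <vector>

    // Before: a scratch buffer is constructed (and heap-allocated) on every call.
    class HatBefore {
    public:
        void update_weights(const std::vector<double>& new_weights) {
            std::vector<double> scratch(new_weights.size()); // built each call
            // ... fill scratch and update internal state ...
        }
    };

    // After: the buffer is a class member, so it is constructed once per
    // object and just resized (usually without reallocating) on each call.
    class HatAfter {
    public:
        void update_weights(const std::vector<double>& new_weights) {
            scratch.resize(new_weights.size());
            // ... fill scratch and update internal state ...
        }
    private:
        std::vector<double> scratch;
    };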
Even then, compilers will do a lot of that on their own within function or method blocks. They can't do it up at the class level the way you did, because that would mean changing an object's data layout, which breaks compatibility with code compiled without the optimization. But anything in a self-contained block with a fixed interface to the rest of the code, the compiler will generally optimize better than a human programmer would. So it's best not to worry about those details and to focus instead on building good algorithms and data structures.
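For instance, inside a single function the compiler will hoist a loop-invariant computation out of a loop on its own; it just won't invent a new data member to cache something across calls (a made-up illustration, not from your code):

    #include <vector>

    // scale * scale never changes inside the loop, so the optimizer is free
    // to compute it once before the loop -- nothing outside this function
    // can tell the difference.
    double sum_scaled(const std::vector<double>& values, double scale) {
        double total = 0.0;
        for (double v : values)
            total += v * (scale * scale);
        return total;
    }

    // Caching something across *calls*, though, would require adding a data
    // member, which changes the class layout -- that decision is left to you.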
--edit: and damn! I wish the people I work with commented their code as thoroughly as you do. Just a note: you may want to split hat.h into hat.h and hat.cpp. That way, if you ever change a method in a way that doesn't affect the interface, you don't need to recompile everything that #includes hat.h. It's a habit best learned early, lest you end up recompiling and relinking thousands of source files instead of relinking them and recompiling just one.
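Something along these lines (the signatures are placeholders, not your actual interface):

    // hat.h -- declarations only; this is all that client code needs to see.
    #ifndef HAT_H
    #define HAT_H
    #include <vector>

    class Hat {
    public:
        void update_weights();      // declared here, defined in hat.cpp
    private:
        std::vector<double> weights;
    };

    #endif // HAT_H

    // hat.cpp -- definitions; editing this file recompiles only hat.cpp,
    // while everything that #includes hat.h just gets relinked.
    #include "hat.h"

    void Hat::update_weights() {
        // ... implementation ...
    }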
