At 12:58 4/3/1998 +0200, G DOT DegliEsposti AT ads DOT it wrote:
>>I understand that Optimization produces code that the debugger has
>>problems with for various reasons but, if -O3 produces such fast
>This is not completely true: gdb can debug programs compiled with
>optimization options. IIRC some problems can come up when the optimizer
>swaps the order of some instructions and you can get confused because
>it seems like some instructions are skipped, while they are only
>executed in a different order.
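For example, in a made-up fragment like the one below (the function and
variable names are invented), GCC at -O2 is free to compute the two locals
in either order, so single-stepping in gdb may appear to jump between the
source lines:

int reorder_example(int a, int b)
{
  int x = a * 7;   /* the optimizer may schedule this computation... */
  int y = b + 1;   /* ...after this one, so gdb appears to step backwards */
  return x + y;
}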
It is quite possible that some pieces of code get deleted entirely. A
trivial example:
int a;
a = 3;
a = 4;
GCC will not generate code for the line `a = 3', since that value is never
used. Trying to set a breakpoint on that line would not work.
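Put into a complete program (the file and variable names here are made up),
the effect looks like this:

/* dead.c: compile with `gcc -g -O2 dead.c'.  GCC may emit no code at
   all for the `a = 3' store (it may even fold the whole function down
   to `return 4'), so a breakpoint set on that source line in gdb may
   get moved to the nearest line that does have code. */
int main(void)
{
  int a;
  a = 3;   /* dead store: this value is never read */
  a = 4;
  return a;
}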
Also, variables can be optimized entirely out of existence. Another trivial
example:
int f()
{
  int a;
  a = 4;
  return a * 3;
}
GCC can figure out that this is the same as `int f() { return 12; }', and
will generate equivalent code. The variable `a' will not exist, and `print
a' will yield something like "No such variable".
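If you really need to look at such a variable while still compiling with
-O2, one possible workaround is to declare it `volatile'; GCC then has to
keep the variable in memory, so `print a' should work again, at the cost of
losing the optimization:

int f(void)
{
  volatile int a;   /* `volatile' forces the variable and the store to exist */
  a = 4;
  return a * 3;     /* no longer folded to a constant 12 at compile time */
}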
>If this can explain why to use -O2 instead of -O3, I agree
>with you on one point: why -O1 instead of -O2?
>Maybe the switch is available for backward compatibility?
First of all, `-O3' is just `-O2' plus function inlining. Function inlining
is frequently a bad thing, especially for application compiles. It can make
the code much larger, defeating the processor's instruction cache. Things like loop
unrolling are so rarely helpful that no `-Ox' option turns them on; you must
do it explicitly.
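To make that concrete (the function names below are invented): with GCC,
-O3 is essentially -O2 plus `-finline-functions', while loop unrolling has
to be requested separately with `-funroll-loops':

/* Compiling with `gcc -O2 -finline-functions' (roughly what -O3 adds
   over -O2) may copy the body of scale() into sum(), making the code
   larger; the loop is only unrolled if -funroll-loops is given. */
static int scale(int x)
{
  return x * 3 + 1;              /* small, so a likely inlining candidate */
}

int sum(const int *v, int n)
{
  int i, total = 0;
  for (i = 0; i < n; i++)        /* a candidate for -funroll-loops */
    total += scale(v[i]);
  return total;
}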
The reason for `-O1' is that `-O2' increases compile time significantly for
a comparatively small additional improvement in the code. This is less of an
issue on today's fast machines, but on a slow machine it could mean the
difference between your compile taking, say, 1 hour and 2 hours.
Nate Eldredge
eldredge AT ap DOT net