Mail Archives: djgpp/1999/08/22/11:41:48

Date: Sun, 22 Aug 1999 13:12:31 +0300 (IDT)
From: Eli Zaretskii <eliz AT is DOT elta DOT co DOT il>
X-Sender: eliz AT is
To: G <msk11 AT cornell DOT edu>
cc: djgpp AT delorie DOT com
Subject: Re: Possible Bug in realloc()
In-Reply-To: <7pa8ti$73k@newsstand.cit.cornell.edu>
Message-ID: <Pine.SUN.3.91.990822131148.6405Y-100000@is>
MIME-Version: 1.0
Reply-To: djgpp AT delorie DOT com
X-Mailing-List: djgpp AT delorie DOT com
X-Unsubscribes-To: listserv AT delorie DOT com

On Mon, 16 Aug 1999, G wrote:

> newptr= (char *)malloc(size);
> .
> .
> .
> memcpy(newptr, ptr, copysize);
> free(ptr);
> return newptr;
> 
>    It appears to me that when malloc is unable to allocate the requested
> amount of memory and returns NULL, data from the old block of memory (whose
> address is ptr) is copied to offset 0.

This is indeed a bug, thanks.  It is now corrected in the development
sources, and the fixed version of `realloc' will be in DJGPP v2.03.
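
For reference, here is a minimal sketch of the corrected logic.  The
function name `realloc_sketch' and the explicit `copysize' parameter
are only for illustration, not the actual library source; the real
allocator computes the copy size from its own bookkeeping.  The point
is the added check: if malloc fails, return NULL and leave the
caller's block untouched instead of copying to address 0.

#include <stdlib.h>
#include <string.h>

void *
realloc_sketch (void *ptr, size_t size, size_t copysize)
{
  char *newptr;

  if (ptr == NULL)
    return malloc (size);       /* realloc(NULL, n) acts like malloc(n) */

  newptr = (char *) malloc (size);
  if (newptr == NULL)           /* allocation failed: don't touch *ptr  */
    return NULL;

  memcpy (newptr, ptr, copysize);   /* copysize = min(old size, size)    */
  free (ptr);
  return newptr;
}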

>    Let's say you have 600 total units of space on the computer.
>    Say 300 units are allocated in 1 large block and the subsequent 300 are
> free.
>    If you want to enlarge the first block to 500 units, a "smart" realloc()
> could just enlarge the 300 unit block to 500 units if no block were already
> using the 200 units of space immediately following the 300 unit block.
> Malloc(), on the other hand, would only see 300 free units (not knowing the
> 300 units already used could be part of the necessary 500) and so would deny
> the request for a 500 unit block, whereas a "smart" realloc could grant it.

It might be possible to make such an optimization; thanks for the
suggestion.  We might consider this for DJGPP v2.04 (v2.03 is a
bugfix-only release, so it would be unwise to introduce changes in
core functions such as `realloc').
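
To illustrate the idea, here is a conceptual sketch of such in-place
growth.  The `chunk' structure and the `grow_in_place' function are
hypothetical and do not reflect DJGPP's actual allocator internals;
they only show the test the poster describes: if the space immediately
after the block is a free chunk that is large enough, absorb it
instead of allocating a new block and copying.

#include <stddef.h>

/* Hypothetical per-chunk bookkeeping.  */
struct chunk
{
  size_t size;          /* usable bytes in this chunk                 */
  int is_free;          /* nonzero if the chunk is not allocated      */
  struct chunk *next;   /* chunk that sits immediately after this one */
};

/* Try to grow chunk `c' to `newsize' bytes without moving it.
   Return 1 on success, 0 if the caller must fall back to the usual
   malloc + memcpy + free path, as the current `realloc' always does.  */
int
grow_in_place (struct chunk *c, size_t newsize)
{
  struct chunk *nxt = c->next;

  if (newsize <= c->size)
    return 1;                   /* already large enough               */

  if (nxt != NULL && nxt->is_free
      && c->size + nxt->size >= newsize)
    {
      c->size += nxt->size;     /* absorb the adjacent free chunk     */
      c->next = nxt->next;
      return 1;
    }

  return 0;                     /* following space not free: must move */
}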

Please also note that, given the availability of virtual memory and
the general abundance of memory on today's machines, the fact that
`realloc' doesn't perform this optimization is of little practical
importance in most cases, because it is very rare for a program to
exhaust all of the available virtual memory.  In practice, you will
only see such problems in programs that allocate many megabytes in a
single chunk, and the price paid is usually just the time it takes to
copy the data from the old buffer to the new one, which the optimized
version would avoid.
