Now that APC is installed, I assumed memory tracing would show less memory being consumed by requires (since the compiled code is shared). However, XDebug traces of the same page (before and after, with multiple reloads) show that memory usage is still the same as it was beforehand.
I have confirmed that APC is working by successfully outputting apc_cache_info().
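Roughly what I did to confirm it (a minimal sketch, not the exact code I ran):

<?php
// A minimal sketch: confirm the extension is loaded and dump the
// top-level opcode cache stats returned by apc_cache_info().
if (!extension_loaded('apc')) {
    die('APC is not loaded');
}
$info = apc_cache_info();
echo 'Entries: ' . $info['num_entries']
    . ', hits: ' . $info['num_hits']
    . ', bytes: ' . $info['mem_size'] . PHP_EOL;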
Here is my APC config from php.ini:
[APC]
; This can be set to 0 to disable APC.
apc.enabled=1
; The number of shared memory segments to allocate for the compiler cache.
apc.shm_segments=1
; The size of each shared memory segment, with M/G suffixes
apc.shm_size=64M
; A "hint" about the number of distinct source files that will be included or
; requested on your web server. Set to zero or omit if you're not sure;
apc.num_files_hint=1024
; Just like num_files_hint, a "hint" about the number of distinct user cache
; variables to store. Set to zero or omit if you're not sure;
apc.user_entries_hint=4096
; The number of seconds a cache entry is allowed to idle in a slot in case this
; cache entry slot is needed by another entry.
apc.ttl=7200
; use the SAPI request start time for TTL
apc.use_request_time=1
; The number of seconds a user cache entry is allowed to idle in a slot in case
; this cache entry slot is needed by another entry.
apc.user_ttl=7200
; The number of seconds that a cache entry may remain on the garbage-collection list.
apc.gc_ttl=3600
; On by default, but can be set to off and used in conjunction with positive
; apc.filters so that files are only cached if matched by a positive filter.
apc.cache_by_default=1
; A comma-separated list of POSIX extended regular expressions.
apc.filters
; The mktemp-style file_mask to pass to the mmap module
apc.mmap_file_mask=c:/apc_cache/apc.XXXXXX
; This file_update_protection setting puts a delay on caching brand new files.
apc.file_update_protection=2
; Setting this enables APC for the CLI version of PHP (Mostly for testing and debugging).
apc.enable_cli=0
; Prevents large files from being cached
apc.max_file_size=1M
; Whether to stat the main script file and the fullpath includes.
apc.stat=1
; Verification with ctime will avoid problems caused by programs such as svn or rsync by making
; sure inodes haven't changed since the last stat. APC will normally only check mtime.
apc.stat_ctime=0
; Whether to canonicalize paths in stat=0 mode or fall back to stat behaviour
apc.canonicalize=0
; With write_lock enabled, only one process at a time will try to compile an
; uncached script while the other processes will run uncached
apc.write_lock=1
; Logs any scripts that were automatically excluded from being cached due to early/late binding issues.
apc.report_autofilter=0
; This setting is deprecated and replaced by apc.write_lock, so let's set it to zero.
apc.slam_defense=0
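For completeness, a quick way to confirm these values are actually in effect for the SAPI being traced (a minimal sketch; a CLI php.ini can differ from the web server's):

<?php
// A minimal sketch: dump the effective runtime values of a few of the
// settings above, to rule out the ini block not being picked up.
foreach (array('apc.enabled', 'apc.shm_size', 'apc.stat', 'apc.max_file_size') as $key) {
    echo $key . ' = ' . ini_get($key) . PHP_EOL;
}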
Edit #1
I finally got around to double-checking...
apc_cache_info() will show:
DEBUG: Array
(
    [num_slots] => 2048
    [ttl] => 7200
    [num_hits] => 4313
    [num_misses] => 124
    [start_time] => 1334152023
    [expunges] => 0
    [mem_size] => 21653480
    [num_entries] => 119
    [num_inserts] => 124
    [file_upload_progress] => 1
    [memory_type] => IPC shared
    [locking_type] => file
    [cache_list] => Array
        (
            [0] => Array
                (
                    [filename] => ...
phpinfo() will show:
apc

APC Support     enabled
Version         3.0.15-dev
MMAP Support    Disabled
Locking type    File Locks
Revision        $Revision: 3.145 $
Build Date      May 31 2007 09:39:25
Can headers be sent that would change the behavior of APC (i.e., to always get a fresh copy of the web page)?
Edit #2
I even increased the apc.max_file_size setting to 5M, just to see if that was the issue. Same thing. Requires are still using the same amount of RAM between reloads.
I also noticed that [num_hits] for files does increase.
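To see which files those hits belong to, something like this works (a sketch; the cache_list keys may vary slightly between APC versions):

<?php
// A sketch: list per-file hit counts from the opcode cache.
$info = apc_cache_info();
foreach ($info['cache_list'] as $entry) {
    echo $entry['filename'] . ' => ' . $entry['num_hits'] . ' hits' . PHP_EOL;
}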
Edit #3
From my trace file:
0.0740 6152208 +492816 -> require(E:\my_require.php) E:\main.php:60
Shouldn't the memory delta show a considerably lower value on subsequent passes, once it has been cached?
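For comparison, the same delta can be measured without XDebug (a minimal sketch; E:\my_require.php is the file from the trace above):

<?php
// A minimal sketch: measure the per-request cost of the same require
// directly, independent of the XDebug tracer.
$before = memory_get_usage();
require 'E:\my_require.php';
$after = memory_get_usage();
echo 'require delta: ' . ($after - $before) . ' bytes' . PHP_EOL;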
Yes, it should.
I also thought that using a cache engine would cache and share that memory, limiting the amount of memory used for the scripts.
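One way to see the two kinds of memory side by side, the shared segment APC fills versus the per-request heap that memory_get_usage() reports (a minimal sketch, assuming the standard apc_sma_info() keys):

<?php
// A minimal sketch: compare APC's shared-memory usage (the opcode cache)
// with the current request's own memory usage.
$sma = apc_sma_info();
$sharedUsed = ($sma['num_seg'] * $sma['seg_size']) - $sma['avail_mem'];
echo 'APC shared memory in use: ' . $sharedUsed . ' bytes' . PHP_EOL;
echo 'This request (memory_get_usage): ' . memory_get_usage() . ' bytes' . PHP_EOL;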