Securing php.ini and php.cgi with .htaccess


To execute CGI scripts, a web server must be able to access the interpreter used for that script. But what if you directly request /cgi-bin/php.ini or /cgi-bin/php.cgi? If either one shows up, that's a major problem; try it on your site.

.htaccess Solution

The solution relies on how Apache handles these requests internally. When you request /index.php, Apache (or whatever server you are using) makes a subrequest, an internal redirect, to the PHP interpreter at /cgi-bin/php.cgi. When it performs an internal redirect like that, it adds some special environment variables: the normal CGI variables, prefixed with REDIRECT_.

We only want internal/sub redirected requests to be allowed to access /cgi-bin/php.ini and /cgi-bin/php.cgi, and .htaccess provides several methods to achieve this type of access control.

Only allow if REDIRECT_STATUS is set

By using the AddHandler and Action directives below, we set up Apache to automatically set the REDIRECT_STATUS environment variable (and also PATH_TRANSLATED, which is important for suEXEC among other things) whenever it internally redirects a .php request to the interpreter.

AddHandler php-cgi .php
Action php-cgi /cgi-bin/php.cgi

Using access control

Since we only want to allow requests that have the REDIRECT_STATUS environment variable set, we can issue a 403 Forbidden to everything else. Place this in your /cgi-bin/.htaccess file.

Order Deny,Allow
Deny from All
Allow from env=REDIRECT_STATUS
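The Order/Deny/Allow directives above are the Apache 2.2 syntax. On Apache 2.4 and later, access control moved to mod_authz_core; a rough equivalent (a sketch, assuming mod_authz_core is loaded) would be:

```apache
# Apache 2.4+ (mod_authz_core): allow only requests that carry the
# REDIRECT_STATUS variable, i.e. internal redirects
Require env REDIRECT_STATUS
```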

Combine with FilesMatch

This can go in your /.htaccess file; the regular expression matches php.ini, php.cgi, php5.ini, and php5.cgi.

<FilesMatch "^php5?\.(ini|cgi)$">
Order Deny,Allow
Deny from All
Allow from env=REDIRECT_STATUS
</FilesMatch>

Only allowing for REDIRECT_STATUS=200

You may also use mod_rewrite's power to tighten access further by only allowing redirects with a 200 status code. This comes into play if your default ErrorDocuments are themselves PHP scripts: with ErrorDocument 403 /error.php, the redirected request will have a REDIRECT_STATUS of 403, not 200.

ErrorDocument 403 /error.php
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_URI} ^.*\.(php|cgi)$
RewriteCond %{ENV:REDIRECT_STATUS} !200
RewriteRule .* - [F]

PHP Security Documentation

CGI-BIN security

Using PHP as a CGI binary is an option for setups that do not wish to integrate PHP as a module into the server software (like Apache), or that use PHP with various CGI wrappers to create safe chroot and setuid environments for scripts. This setup usually involves installing the executable PHP binary in the web server's cgi-bin directory.
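A minimal sketch of such a setup (the paths here are assumptions; adjust them for your layout):

```apache
# Sketch: PHP installed as a CGI binary in the server's cgi-bin
ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
AddHandler php-cgi .php
Action php-cgi /cgi-bin/php.cgi
```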

Apache's Solution

Each new variable will have the prefix REDIRECT_. REDIRECT_ environment variables are created from the CGI environment variables which existed prior to the redirect; they are renamed with a REDIRECT_ prefix, i.e., HTTP_USER_AGENT becomes REDIRECT_HTTP_USER_AGENT. In addition to these new variables, Apache will define REDIRECT_URL and REDIRECT_STATUS to help the script trace its origin. Both the original URL and the URL being redirected to can be logged in the access log.

suEXEC Safe Variables list

suEXEC support

The suEXEC feature provides Apache users the ability to run CGI and SSI programs under user IDs different from the user ID of the calling web server. Normally, when a CGI or SSI program executes, it runs as the same user who is running the web server. Used properly, this feature can considerably reduce the security risks involved with allowing users to develop and run private CGI or SSI programs. However, if suEXEC is improperly configured, it can cause any number of problems and possibly create new holes in your computer's security. If you aren't familiar with managing setuid root programs and the security issues they present, we highly recommend that you not consider using suEXEC.
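In practice, suEXEC is enabled per virtual host with mod_suexec's SuexecUserGroup directive; a sketch (the user, group, and paths are assumptions):

```apache
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /home/siteuser/public_html
    # CGI programs under this vhost run as siteuser:sitegroup
    SuexecUserGroup siteuser sitegroup
</VirtualHost>
```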

From suexec.c (list truncated; it includes REDIRECT_STATUS= and REDIRECT_URL= among the standard CGI variables):

static const char *const safe_env_lst[] =
{
    /* variable name starts with */
    "HTTP_",
    "SSL_",

    /* variable name is */
    "REDIRECT_STATUS=",
    "REDIRECT_URL=",
    /* ... */
    NULL
};

CERT Advisory

Many sites that maintain a Web server support CGI programs. Often these programs are scripts that are run by general-purpose interpreters, such as /bin/sh or PERL. If the interpreters are located in the CGI bin directory along with the associated scripts, intruders can access the interpreters directly and arrange to execute arbitrary commands on the Web server system. All programs in the CGI bin directory can be executed with arbitrary arguments, so it is important to carefully design the programs to permit only the intended actions regardless of what arguments are used. This is difficult enough in general, but is a special problem for general-purpose interpreters since they are designed to execute arbitrary programs based on their arguments. *All* programs in the CGI bin directory must be evaluated carefully, even relatively limited programs such as gnu-tar and find.

Impact and Solution

If general-purpose interpreters are accessible in a Web server's CGI bin directory, then a remote user can execute any command the interpreters can execute on that server. The solution to this problem is to ensure that the CGI bin directory does not include any general-purpose interpreters, for example: PERL, Tcl, UNIX shells (sh, csh, ksh, etc.).

Apache Nuts and Bolts

If you really want the details, start with modules/http/http_request.c in the Apache source code. The excerpts below are lightly trimmed.

AP_DECLARE(void) ap_die(int type, request_rec *r)
{
    int error_index = ap_index_of_response(type);
    char *custom_response = ap_response_code_string(r, error_index);
    int recursive_error = 0;
    request_rec *r_1st_err = r;

    if (type == AP_FILTER_ERROR) {
        return;
    }

    if (type == DONE) {
        ap_finalize_request_protocol(r);
        return;
    }

    /*
     * The following takes care of Apache redirects to custom response URLs
     * Note that if we are already dealing with the response to some other
     * error condition, we just report on the original error, and give up on
     * any attempt to handle the other thing "intelligently"...
     */
    if (r->status != HTTP_OK) {
        recursive_error = type;

        while (r_1st_err->prev && (r_1st_err->prev->status != HTTP_OK))
            r_1st_err = r_1st_err->prev;  /* Get back to original error */

        if (r_1st_err != r) {
            /* The recursive error was caused by an ErrorDocument specifying
             * an internal redirect to a bad URI.  ap_internal_redirect has
             * changed the filter chains to point to the ErrorDocument's
             * request_rec.  Back out those changes so we can safely use the
             * original failing request_rec to send the canned error message.
             *
             * ap_send_error_response gets rid of existing resource filters
             * on the output side, so we can skip those.
             */
            update_r_in_filters(r_1st_err->proto_output_filters, r, r_1st_err);
            update_r_in_filters(r_1st_err->input_filters, r, r_1st_err);
        }

        custom_response = NULL; /* Do NOT retry the custom thing! */
    }

    r->status = type;

    /*
     * This test is done here so that none of the auth modules needs to know
     * about proxy authentication.  They treat it like normal auth, and then
     * we tweak the status.
     */
    if (HTTP_UNAUTHORIZED == r->status && PROXYREQ_PROXY == r->proxyreq) {
        r->status = HTTP_PROXY_AUTHENTICATION_REQUIRED;
    }

    /* If we don't want to keep the connection, make sure we mark that the
     * connection is not eligible for keepalive.  If we want to keep the
     * connection, be sure that the request body (if any) has been read.
     */
    if (ap_status_drops_connection(r->status)) {
        r->connection->keepalive = AP_CONN_CLOSE;
    }

    /*
     * Two types of custom redirects --- plain text, and URLs. Plain text has
     * a leading '"', so the URL code, here, is triggered on its absence
     */
    if (custom_response && custom_response[0] != '"') {

        if (ap_is_url(custom_response)) {
            /*
             * The URL isn't local, so lets drop through the rest of this
             * apache code, and continue with the usual REDIRECT handler.
             * But note that the client will ultimately see the wrong
             * status...
             */
            r->status = HTTP_MOVED_TEMPORARILY;
            apr_table_setn(r->headers_out, "Location", custom_response);
        }
        else if (custom_response[0] == '/') {
            const char *error_notes;
            r->no_local_copy = 1;       /* Do NOT send HTTP_NOT_MODIFIED for
                                         * error documents! */
            /*
             * This redirect needs to be a GET no matter what the original
             * method was.
             */
            apr_table_setn(r->subprocess_env, "REQUEST_METHOD", r->method);

            /*
             * Provide a special method for modules to communicate
             * more informative (than the plain canned) messages to us.
             * Propagate them to ErrorDocuments via the ERROR_NOTES variable:
             */
            if ((error_notes = apr_table_get(r->notes,
                                             "error-notes")) != NULL) {
                apr_table_setn(r->subprocess_env, "ERROR_NOTES", error_notes);
            }
            r->method = apr_pstrdup(r->pool, "GET");
            r->method_number = M_GET;
            ap_internal_redirect(custom_response, r);
            return;
        }
        else {
            /*
             * Dumb user has given us a bad url to redirect to --- fake up
             * dying with a recursive server error...
             */
            recursive_error = HTTP_INTERNAL_SERVER_ERROR;
            ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r,
                          "Invalid error redirection directive: %s",
                          custom_response);
        }
    }

    ap_send_error_response(r_1st_err, recursive_error);
}
static apr_table_t *rename_original_env(apr_pool_t *p, apr_table_t *t)
{
    const apr_array_header_t *env_arr = apr_table_elts(t);
    const apr_table_entry_t *elts = (const apr_table_entry_t *) env_arr->elts;
    apr_table_t *new = apr_table_make(p, env_arr->nalloc);
    int i;

    for (i = 0; i < env_arr->nelts; ++i) {
        if (!elts[i].key)
            continue;
        apr_table_setn(new, apr_pstrcat(p, "REDIRECT_", elts[i].key, NULL),
                       elts[i].val);
    }

    return new;
}
static request_rec *internal_internal_redirect(const char *new_uri,
                                               request_rec *r)
{
    int access_status;
    request_rec *new;

    if (ap_is_recursion_limit_exceeded(r)) {
        ap_die(HTTP_INTERNAL_SERVER_ERROR, r);
        return NULL;
    }

    new = (request_rec *) apr_pcalloc(r->pool, sizeof(request_rec));

    new->connection = r->connection;
    new->server     = r->server;
    new->pool       = r->pool;

    /*
     * A whole lot of this really ought to be shared with http_protocol.c...
     * another missing cleanup.  It's particularly inappropriate to be
     * setting header_only, etc., here.
     */

    new->method          = r->method;
    new->method_number   = r->method_number;
    new->allowed_methods = ap_make_method_list(new->pool, 2);
    ap_parse_uri(new, new_uri);

    new->request_config = ap_create_request_config(r->pool);

    new->per_dir_config = r->server->lookup_defaults;

    new->prev = r;
    r->next   = new;

    /* Must have prev and next pointers set before calling create_request
     * hook.
     */
    ap_run_create_request(new);

    /* Inherit the rest of the protocol info... */
    new->the_request = r->the_request;

    new->allowed         = r->allowed;

    new->status          = r->status;
    new->assbackwards    = r->assbackwards;
    new->header_only     = r->header_only;
    new->protocol        = r->protocol;
    new->proto_num       = r->proto_num;
    new->hostname        = r->hostname;
    new->request_time    = r->request_time;
    new->main            = r->main;

    new->headers_in      = r->headers_in;
    new->headers_out     = apr_table_make(r->pool, 12);
    new->err_headers_out = r->err_headers_out;
    new->subprocess_env  = rename_original_env(r->pool, r->subprocess_env);
    new->notes           = apr_table_make(r->pool, 5);
    new->allowed_methods = ap_make_method_list(new->pool, 2);

    new->htaccess        = r->htaccess;
    new->no_cache        = r->no_cache;
    new->expecting_100   = r->expecting_100;
    new->no_local_copy   = r->no_local_copy;
    new->read_length     = r->read_length;     /* We can only read it once */
    new->vlist_validator = r->vlist_validator;

    new->proto_output_filters  = r->proto_output_filters;
    new->proto_input_filters   = r->proto_input_filters;

    new->output_filters  = new->proto_output_filters;
    new->input_filters   = new->proto_input_filters;

    if (new->main) {
        /* Add back the subrequest filter, which we lost when
         * we set output_filters to include only the protocol
         * filters.
         */
        ap_add_output_filter_handle(ap_subreq_core_filter_handle,
                                    NULL, new, new->connection);
    }

    update_r_in_filters(new->input_filters, r, new);
    update_r_in_filters(new->output_filters, r, new);

    apr_table_setn(new->subprocess_env, "REDIRECT_STATUS",
                   apr_itoa(r->pool, r->status));

    /*
     * XXX: hmm.  This is because mod_setenvif and mod_unique_id really need
     * to do their thing on internal redirects as well.  Perhaps this is a
     * misnamed function.
     */
    if ((access_status = ap_run_post_read_request(new))) {
        ap_die(access_status, new);
        return NULL;
    }

    return new;
}


June 24th, 2010

Comments Welcome

  • Siasy Collins

    Hi, what does one do if one of the world's biggest website hosting companies does not accept .htaccess files?
    I have recently updated my website, and found that I needed to move all my previous HTML pages to the new pages, in PHP.
    Also, the new site has many dynamic pages with php? followed by parameters and queries.
    Yahoo uses Apache for its PHP. But without an .htaccess file, is there any other way to 1. 301-redirect the previous pages to the new ones, and 2. mod_rewrite the dynamic URLs into easier-to-read ones?

    Have you or anyone you know had to work with a Yahoo-hosted site before?

  • AskApache

    @ Siasy

    Just relentlessly search for a workaround is what I do, unless of course you have the option to move to DreamHost.

    Yahoo is problematic in this area, I've had to work with them in the past for clients who had the cash to pay me to waste my time fixing their stuff.



Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution 3.0 License, just credit with a link.