

  • PATH_INFO Changes in the CGI Environment - Apache HTTP Server
    …variables by looking at the filename, not the URL. While this resulted in the correct values in many cases, when the filesystem path was overloaded to contain path information it could result in errant behavior. For example, if the following appeared in a config file:

        Alias /cgi-ralph /usr/local/httpd/cgi-bin/user.cgi/ralph

    In this case, user.cgi is the CGI script; the "/ralph" is information to be passed onto the CGI. If this configuration was in place, and a request came for "/cgi-ralph/script/", the code would set PATH_INFO to "/ralph/script" and SCRIPT_NAME to "/cgi-". Obviously, the latter is incorrect. In certain cases this could even cause the server to crash. The Solution: Apache 1.2 and later now determine SCRIPT_NAME and PATH_INFO by looking directly at the URL, determining how much of the URL is client-modifiable, and setting PATH_INFO to it. To use the above example, PATH_INFO would be set to "/script" and SCRIPT_NAME to "/cgi-ralph". This makes sense and results in no server behavior problems. It also permits the script to be guaranteed that http://$SERVER_NAME:$SERVER_PORT$SCRIPT_NAME$PATH_INFO will always be an accessible URL that points to the current script, something which was not necessarily true with previous versions of Apache. However, the "/ralph" information from the Alias directive is lost. This is unfortunate, but we feel that using the filesystem to pass along this sort of information is not a recommended method, and a script making use of it "deserves" not to work. Apache 1.2b3 and later, however, do provide a workaround. Compatibility with Previous Servers: It may be necessary for a script that was designed for earlier versions of Apache, or other servers, to…
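
The rule described above — SCRIPT_NAME is the part of the URL that maps to the script, PATH_INFO is the trailing, client-modifiable remainder — can be sketched as a small standalone function. This is an illustrative toy, not Apache's actual code; the name path_info and the prefix-matching strategy are assumptions for the example.

```c
#include <string.h>

/* Toy sketch (not Apache source): given the client-visible SCRIPT_NAME
 * and the full request URI, derive PATH_INFO as whatever follows the
 * script name in the URL, mirroring the Apache 1.2+ rule above. */
static const char *path_info(const char *script_name, const char *uri)
{
    size_t n = strlen(script_name);
    if (strncmp(uri, script_name, n) == 0)
        return uri + n;   /* the trailing, client-modifiable part */
    return "";            /* prefix doesn't match: no extra path info */
}
```

With the example above, path_info("/cgi-ralph", "/cgi-ralph/script") yields "/script", matching the PATH_INFO the newer servers compute.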

    Original URL path: http://ama09.obspm.fr/manual-2.0/cgi_path.html (2015-11-16)


  • Relevant Standards - Apache HTTP Server
    …Track): The Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems. This documents HTTP/1.1. RFC 2396 (Standards Track): A Uniform Resource Identifier (URI) is a compact string of characters for identifying an abstract or physical resource. HTML Recommendations: Regarding the Hypertext Markup Language, Apache complies with the following IETF and W3C recommendations. RFC 2854 (Informational): This document summarizes the history of HTML development, and defines the "text/html" MIME type by pointing to the relevant W3C recommendations. HTML 4.01 Specification (Errata): This specification defines the HyperText Markup Language (HTML), the publishing language of the World Wide Web; this specification defines HTML 4.01, which is a subversion of HTML 4. HTML 3.2 Reference Specification: The HyperText Markup Language (HTML) is a simple markup language used to create hypertext documents that are portable from one platform to another; HTML documents are SGML documents. XHTML 1.1 - Module-based XHTML (Errata): This Recommendation defines a new XHTML document type that is based upon the module framework and modules defined in Modularization of XHTML. XHTML 1.0 - The Extensible HyperText Markup Language (Second Edition) (Errata): This specification defines the Second Edition of XHTML 1.0, a reformulation of HTML 4 as an XML 1.0 application, and three DTDs corresponding to the ones defined by HTML 4. Authentication: Concerning the different methods of authentication, Apache follows the following IETF recommendations. RFC 2617 (Draft Standard): HTTP/1.0 includes the specification for a Basic Access Authentication scheme. Language/Country Codes: The following links document ISO and other language and country code information. ISO 639-2: ISO 639 provides two sets of language codes, one as a two-letter code set (639-1) and another as a three-letter code set (this part of ISO…

    Original URL path: http://ama09.obspm.fr/manual-2.0/misc/relevant_standards.html (2015-11-16)

  • Terms Used to Describe Modules - Apache HTTP Server
    …with status "MPM" is a Multi-Processing Module. Unlike the other types of modules, Apache must have one and only one MPM in use at any time. This type of module is responsible for basic request handling and dispatching. Base: A module labeled as having "Base" status is compiled and loaded into the server by default, and is therefore normally available unless you have taken steps to remove the module from your configuration. Extension: A module with "Extension" status is not normally compiled and loaded into the server. To enable the module and its functionality, you may need to change the server build configuration files and re-compile Apache. Experimental: "Experimental" status indicates that the module is available as part of the Apache kit, but you are on your own if you try to use it. The module is being documented for completeness, and is not necessarily supported. External: Modules which are not included with the base Apache distribution ("third-party modules") may use the "External" status. We are not responsible for, nor do we support, such modules. Source File: This quite simply lists the name of the source file which contains the code for the module. This is also…

    Original URL path: http://ama09.obspm.fr/manual-2.0/mod/module-dict.html (2015-11-16)

  • Apache 1.3 API notes - Apache HTTP Server
    …ap_sub_req_lookup_uri and ap_sub_req_method_uri; these construct a new request_rec structure and process it as you would expect, up to but not including the point of actually sending a response. (These functions skip over the access checks if the sub-request is for a file in the same directory as the original request.) Server-side includes work by building sub-requests and then actually invoking the response handler for them, via the function ap_run_sub_req.

    Handling requests, declining, and returning error codes. As discussed above, each handler, when invoked to handle a particular request_rec, has to return an int to indicate what happened. That can either be OK (the request was handled successfully; this may or may not terminate the phase), DECLINED (no erroneous condition exists, but the module declines to handle the phase; the server tries to find another), or an HTTP error code, which aborts handling of the request. Note that if the error code returned is REDIRECT, then the module should put a Location in the request's headers_out, to indicate where the client should be redirected to.

    Special considerations for response handlers. Handlers for most phases do their work by simply setting a few fields in the request_rec structure (or, in the case of access checkers, simply by returning the correct error code). However, response handlers have to actually send a request back to the client. They should begin by sending an HTTP response header, using the function ap_send_http_header. (You don't have to do anything special to skip sending the header for HTTP/0.9 requests; the function figures out on its own that it shouldn't do anything.) If the request is marked header_only, that's all they should do; they should return after that, without attempting any further output. Otherwise, they should produce a request body which responds to the client as appropriate. The primitives for this are ap_rputc and ap_rprintf, for internally generated output, and ap_send_fd, to copy the contents of some FILE * straight to the client.

    At this point, you should more or less understand the following piece of code, which is the handler which handles GET requests which have no more specific handler; it also shows how conditional GETs can be handled, if it's desirable to do so in a particular response handler: ap_set_last_modified checks against the If-Modified-Since value supplied by the client, if any, and returns an appropriate code (which will, if nonzero, be USE_LOCAL_COPY). No similar considerations apply for ap_set_content_length, but it returns an error code for symmetry.

        int default_handler (request_rec *r)
        {
            int errstatus;
            FILE *f;

            if (r->method_number != M_GET) return DECLINED;
            if (r->finfo.st_mode == 0) return NOT_FOUND;

            if ((errstatus = ap_set_content_length (r, r->finfo.st_size))
                || (errstatus = ap_set_last_modified (r, r->finfo.st_mtime)))
                return errstatus;

            f = fopen (r->filename, "r");

            if (f == NULL) {
                log_reason ("file permissions deny server access", r->filename, r);
                return FORBIDDEN;
            }

            register_timeout ("send", r);
            ap_send_http_header (r);

            if (!r->header_only) send_fd (f, r);
            ap_pfclose (r->pool, f);
            return OK;
        }

    Finally, if all of this is too much of a challenge, there are a few ways out of it. First off, as shown above, a response handler which has not yet produced any output can simply return an error code, in which case the server will automatically produce an error response. Secondly, it can punt to some other handler by invoking ap_internal_redirect, which is how the internal redirection machinery discussed above is invoked. A response handler which has internally redirected should always return OK. (Invoking ap_internal_redirect from handlers which are not response handlers will lead to serious confusion.)

    Special considerations for authentication handlers. Stuff that should be discussed here in detail: authentication-phase handlers are not invoked unless auth is configured for the directory; common auth configuration is stored in the core per-dir configuration (it has accessors ap_auth_type, ap_auth_name, and ap_requires); common routines handle the protocol end of things, at least for HTTP basic authentication — ap_get_basic_auth_pw, which sets the connection->user structure field automatically, and ap_note_basic_auth_failure, which arranges for the proper WWW-Authenticate header to be sent back.

    Special considerations for logging handlers. When a request has internally redirected, there is the question of what to log. Apache handles this by bundling the entire chain of redirects into a list of request_rec structures which are threaded through the r->prev and r->next pointers. The request_rec which is passed to the logging handlers in such cases is the one which was originally built for the initial request from the client; note that the bytes_sent field will only be correct in the last request in the chain (the one for which a response was actually sent).

    Resource allocation and resource pools. One of the problems of writing and designing a server-pool server is that of preventing leakage — that is, allocating resources (memory, open files, etc.) without subsequently releasing them. The resource pool machinery is designed to make it easy to prevent this from happening, by allowing resources to be allocated in such a way that they are automatically released when the server is done with them.

    The way this works is as follows: the memory which is allocated, files opened, etc., to deal with a particular request are tied to a resource pool which is allocated for the request. The pool is a data structure which itself tracks the resources in question. When the request has been processed, the pool is cleared. At that point, all the memory associated with it is released for reuse, all files associated with it are closed, and any other clean-up functions which are associated with the pool are run. When this is over, we can be confident that all the resources tied to the pool have been released, and that none of them have leaked.

    Server restarts, and allocation of memory and resources for per-server configuration, are handled in a similar way. There is a configuration pool, which keeps track of resources which were allocated while reading the server configuration files, and handling the commands therein (for instance, the memory that was allocated for per-server module configuration, log files and other files that were opened, and so forth). When the server restarts, and has to reread the configuration files, the configuration pool is cleared, and so the memory and file descriptors which were taken up by reading them the last time are made available for reuse.

    It should be noted that use of the pool machinery isn't generally obligatory, except for situations like logging handlers, where you really need to register cleanups to make sure that the log file gets closed when the server restarts (this is most easily done by using the function ap_pfopen, which also arranges for the underlying file descriptor to be closed before any child processes, such as for CGI scripts, are exec()ed), or in case you are using the timeout machinery (which isn't yet even documented here). However, there are two benefits to using it: resources allocated to a pool never leak (even if you allocate a scratch string and just forget about it); also, for memory allocation, ap_palloc is generally faster than malloc. We begin here by describing how memory is allocated to pools, and then discuss how other resources are tracked by the resource pool machinery.

    Allocation of memory in pools. Memory is allocated to pools by calling the function ap_palloc, which takes two arguments: one being a pointer to a resource pool structure, and the other being the amount of memory to allocate (in chars). Within handlers for handling requests, the most common way of getting a resource pool structure is by looking at the pool slot of the relevant request_rec; hence the repeated appearance of the following idiom in module code:

        int my_handler (request_rec *r)
        {
            struct my_structure *foo;

            foo = (foo *) ap_palloc (r->pool, sizeof(my_structure));
        }

    Note that there is no ap_pfree; ap_palloc'ed memory is freed only when the associated resource pool is cleared. This means that ap_palloc does not have to do as much accounting as malloc(); all it does in the typical case is to round up the size, bump a pointer, and do a range check. It also raises the possibility that heavy use of ap_palloc could cause a server process to grow excessively large. There are two ways to deal with this, which are dealt with below: briefly, you can use malloc, and try to be sure that all of the memory gets explicitly free'd, or you can allocate a sub-pool of the main pool, allocate your memory in the sub-pool, and clear it out periodically. The latter technique is discussed in the section on sub-pools below, and is used in the directory-indexing code, in order to avoid excessive storage allocation when listing directories with thousands of files.

    Allocating initialized memory. There are functions which allocate initialized memory, and are frequently useful. The function ap_pcalloc has the same interface as ap_palloc, but clears out the memory it allocates before it returns it. The function ap_pstrdup takes a resource pool and a char * as arguments, and allocates memory for a copy of the string the pointer points to, returning a pointer to the copy. Finally, ap_pstrcat is a varargs-style function, which takes a pointer to a resource pool and at least two char * arguments, the last of which must be NULL. It allocates enough memory to fit copies of each of the strings, as a unit; for instance, ap_pstrcat (r->pool, "foo", "/", "bar", NULL) returns a pointer to 8 bytes worth of memory, initialized to "foo/bar".

    Commonly used pools in the Apache Web server. A pool is really defined by its lifetime more than anything else. There are some static pools in http_main which are passed to various non-http_main functions as arguments at opportune times. Here they are: permanent_pool — never passed to anything else; this is the ancestor of all pools. pconf — subpool of permanent_pool; created at the beginning of a config "cycle"; exists until the server is terminated or restarts; passed to all config-time routines, either via cmd->pool or as the "pool *p" argument on those which don't take pools; passed to the module init() functions. ptemp — sorry, I lie, this pool isn't called this currently in 1.3; I renamed it this in my pthreads development. I'm referring to the use of ptrans in the parent; contrast this with the later definition of ptrans in the child. Subpool of permanent_pool; created at the beginning of a config "cycle"; exists until the end of config parsing; passed to config-time routines via cmd->temp_pool. Somewhat of a "bastard child", because it isn't available everywhere. Used for temporary scratch space which may be needed by some config routines, but which is deleted at the end of config. pchild — subpool of permanent_pool; created when a child is spawned (or a thread is created); lives until that child (thread) is destroyed; passed to the module child_init functions. Destruction happens right after the child_exit functions are called (which may explain why I think child_exit is redundant and unneeded). ptrans — should be a subpool of pchild, but currently is a subpool of permanent_pool (see above); cleared by the child before going into the accept() loop to receive a connection; used as connection->pool. r->pool — for the main request this is a subpool of connection->pool; for subrequests it is a subpool of the parent request's pool. Exists until the end of the request (i.e., ap_destroy_sub_req, or in child_main after process_request has finished). Note that r itself is allocated from r->pool; i.e., r->pool is first created, and then r is the first thing palloc()d from it.

    For almost everything folks do, r->pool is the pool to use. But you can see how other lifetimes, such as pchild, are useful to some modules, such as modules that need to open a database connection once per child and wish to clean it up when the child dies. You can also see how some bugs have manifested themselves, such as setting connection->user to a value from r->pool: in this case connection exists for the lifetime of ptrans, which is longer than r->pool (especially if r->pool is a subrequest!). So the correct thing to do is to allocate from connection->pool. And there was another interesting bug in mod_include/mod_cgi: you'll see in those that they do this test to decide if they should use r->pool or r->main->pool. In this case, the resource that they are registering for cleanup is a child process. If it were registered in r->pool, then the code would wait() for the child when the subrequest finishes. With mod_include this could be any old #include, and the delay can be up to 3 seconds — and it happened quite frequently. Instead, the subprocess is registered in r->main->pool, which causes it to be cleaned up when the entire request is done, i.e., after the output has been sent to the client and logging has happened.

    Tracking open files, etc. As indicated above, resource pools are also used to track other sorts of resources besides memory. The most common are open files. The routine which is typically used for this is ap_pfopen, which takes a resource pool and two strings as arguments; the strings are the same as the typical arguments to fopen, e.g.,

        FILE *f = ap_pfopen (r->pool, r->filename, "r");

        if (f == NULL) { ... } else { ... }

    There is also an ap_popenf routine, which parallels the lower-level open() system call. Both of these routines arrange for the file to be closed when the resource pool in question is cleared. Unlike the case for memory, there are functions to close files allocated with ap_pfopen and ap_popenf, namely ap_pfclose and ap_pclosef. (This is because, on many systems, the number of files which a single process can have open is quite limited.) It is important to use these functions to close files allocated with ap_pfopen and ap_popenf, since to do otherwise could cause fatal errors on systems such as Linux, which react badly if the same FILE* is closed more than once. Using the close functions is not mandatory, since the file will eventually be closed regardless, but you should consider it in cases where your module is opening, or could open, a lot of files.

    Other sorts of resources — cleanup functions. More text goes here. Describe the cleanup primitives in terms of which the file stuff is implemented; also, spawn_process. Pool cleanups live until clear_pool() is called: clear_pool(a) recursively calls destroy_pool() on all subpools of a, then calls all the cleanups for a, then releases all the memory for a. destroy_pool(a) calls clear_pool(a) and then releases the pool structure itself. I.e., clear_pool(a) doesn't delete a; it just frees up all the resources, and you can start using it again immediately.

    Fine control — creating and dealing with sub-pools, with a note on sub-requests. On rare occasions, too-free use of ap_palloc() and the associated primitives may result in undesirably profligate resource allocation. You can deal with such a case by creating a sub-pool, allocating within the sub-pool rather than the main pool, and clearing or destroying the sub-pool, which releases the resources which were associated with it. (This really is a rare situation; the only case in which it comes up in the standard module set is in case of listing directories, and then only with very large directories. Unnecessary use of the primitives discussed here can hair up your code quite a bit, with very little gain.) The primitive for creating a sub-pool is ap_make_sub_pool, which takes another pool (the parent pool) as an argument. When the main pool is cleared, the sub-pool will be destroyed. The sub-pool may also be cleared or destroyed at any time, by calling the functions ap_clear_pool and ap_destroy_pool, respectively. The difference is that ap_clear_pool frees resources associated with the pool, while ap_destroy_pool also deallocates the pool itself. In the former case, you can allocate new resources within the pool, and clear it again, and so forth; in the latter case, it is simply gone. One final note: sub-requests have their own resource pools, which are sub-pools of the resource pool for the main request. The polite way to reclaim the resources associated with a sub-request which you have allocated (using the ap_sub_req_... functions) is ap_destroy_sub_req, which frees the resource pool. Before calling this function, be sure to copy anything that you care about which might be allocated in the sub-request's resource pool into someplace a little less volatile (for instance, the filename in its request_rec structure). Again, under most circumstances you shouldn't feel obliged to call this function; only 2K of memory or so are allocated for a typical sub-request, and it will be freed anyway when the main request pool is cleared. It is only when you are allocating many, many sub-requests for a single main request that you should seriously consider the ap_destroy…
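
The pool discipline described above — allocations and cleanup callbacks tied to a pool, all released at once when the pool is cleared — can be illustrated with a small self-contained sketch. This is not Apache's alloc.c; all names here (toy_pool, toy_palloc, toy_register_cleanup, toy_clear_pool, toy_set_flag) are invented for illustration.

```c
#include <stdlib.h>

/* Toy pool: a list of allocations plus a list of cleanup callbacks.
 * A sketch of the discipline only, not Apache's implementation. */
typedef struct toy_block { struct toy_block *next; } toy_block;
typedef struct toy_cleanup {
    struct toy_cleanup *next;
    void (*fn)(void *data);
    void *data;
} toy_cleanup;
typedef struct toy_pool { toy_block *blocks; toy_cleanup *cleanups; } toy_pool;

/* Like ap_palloc: there is deliberately no matching "free" call. */
static void *toy_palloc(toy_pool *p, size_t n)
{
    toy_block *b = malloc(sizeof(toy_block) + n);
    b->next = p->blocks;       /* remember it so clearing frees it */
    p->blocks = b;
    return b + 1;              /* payload follows the header */
}

static void toy_register_cleanup(toy_pool *p, void (*fn)(void *), void *data)
{
    toy_cleanup *c = malloc(sizeof(*c));
    c->fn = fn; c->data = data; c->next = p->cleanups;
    p->cleanups = c;
}

/* Like clear_pool: run cleanups (closing files, etc.), then free memory. */
static void toy_clear_pool(toy_pool *p)
{
    while (p->cleanups) {
        toy_cleanup *c = p->cleanups;
        p->cleanups = c->next;
        c->fn(c->data);
        free(c);
    }
    while (p->blocks) {
        toy_block *b = p->blocks;
        p->blocks = b->next;
        free(b);
    }
}

/* demo cleanup used below: flips a flag, standing in for fclose */
static void toy_set_flag(void *data) { *(int *)data = 1; }
```

After toy_clear_pool runs, every allocation is gone and every registered cleanup has fired exactly once — the property that lets modules "allocate and forget" within a request.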

    Original URL path: http://ama09.obspm.fr/manual-2.0/developer/API.html (2015-11-16)

  • Debugging Memory Allocation in APR - Apache HTTP Server
    …help detect memory problems. Note that if you're using efence, then you should also add in ALLOC_DEBUG. But don't add in ALLOC_DEBUG if you're using Purify, because ALLOC_DEBUG would hide all the uninitialized read errors that Purify can diagnose.

    Pool Debugging (POOL_DEBUG): This is intended to detect cases where the wrong pool is used when assigning data to an object in another pool. In particular, it causes the table_{set,add,merge}n routines to check that their arguments are safe for the apr_table_t they're being placed in. It currently only works with the unix multiprocess model, but could be extended to others.

    Table Debugging (MAKE_TABLE_PROFILE): Provide diagnostic information about make_table() calls which are possibly too small. This requires a recent gcc which supports __builtin_return_address(). The error log output will be a message such as:

        table_push: apr_table_t created by 0x804d874 hit limit of 10

    Use "l *0x804d874" to find the source that corresponds to. It indicates that an apr_table_t allocated by a call at that address has possibly too small an initial apr_table_t size guess.

    Allocation Statistics (ALLOC_STATS): Provide some statistics on the cost of allocations. This requires a bit of an understanding of how alloc.c works.

    Allowable Combinations: Not all the options outlined above can be activated at the same time; the following table gives more information:

                             ALLOC_DEBUG  ALLOC_USE_MALLOC  POOL_DEBUG  MAKE_TABLE_PROFILE  ALLOC_STATS
        ALLOC_DEBUG               -             No              Yes             Yes              Yes
        ALLOC_USE_MALLOC          No            -               No              No               No
        POOL_DEBUG                Yes           No              -               Yes              Yes
        MAKE_TABLE_PROFILE        Yes           No              Yes             -                Yes
        ALLOC_STATS               Yes           No              Yes             Yes              Yes

    Additionally, the debugging options are not suitable for multi-threaded versions of the server.
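
The idea behind ALLOC_DEBUG — filling fresh memory with a recognizable byte pattern so uninitialized reads stand out in a debugger — can be sketched in a few lines. This toy is illustrative only: the fill value 0xA5 and the names debug_alloc/FILL_BYTE are assumptions for the example, not APR's actual implementation.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch of an ALLOC_DEBUG-style strategy (names and fill
 * value invented): poison fresh allocations with a known byte so that
 * code reading uninitialized memory produces an obvious, repeatable
 * pattern instead of silently varying garbage. */
#define FILL_BYTE 0xA5

static void *debug_alloc(size_t n)
{
    void *p = malloc(n);
    if (p != NULL)
        memset(p, FILL_BYTE, n);   /* uninitialized reads now see 0xA5 */
    return p;
}
```

This is also why such options conflict with Purify-style tools: pre-filling the memory makes every byte "initialized" as far as the tool can tell, hiding the very errors it exists to find.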

    Original URL path: http://ama09.obspm.fr/manual-2.0/developer/debugging.html (2015-11-16)

  • Documenting Apache 2.0 - Apache HTTP Server
    The @deffunc is not always necessary. DoxyGen does not have a full parser in it, so any prototype that uses a macro in the return type declaration is too complex for scandoc; those functions require a @deffunc. An example (using &gt; rather than >):

        /**
         * return the final element of the pathname
         * @param pathname The path to get the final element of
         * @return the final element of the path
         * @tip Examples:
         * <pre>…
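
A complete function carrying a doc comment in the style described above might look like the following sketch. The function name final_path_element and its body are invented for illustration; only the comment format reflects the conventions being documented.

```c
#include <string.h>

/**
 * Return the final element of the pathname.
 * (Illustrative sketch; final_path_element is an invented name, not an
 *  Apache API function.)
 * @param pathname The path to get the final element of
 * @return the final element of the path
 */
static const char *final_path_element(const char *pathname)
{
    const char *slash = strrchr(pathname, '/');
    /* if there is a '/', the final element starts just after it */
    return slash ? slash + 1 : pathname;
}
```

For example, final_path_element("/a/b/c.txt") yields "c.txt", and a path with no slash is returned unchanged.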

    Original URL path: http://ama09.obspm.fr/manual-2.0/developer/documenting.html (2015-11-16)

  • Apache 2.0 Hook Functions - Apache HTTP Server
    …this macro expands to something like this:

        void ap_run_do_something (request_rec *r, int n)
        {
            do_something (r, n);
        }

    Hooks that return a value: If the hook returns a value, then it can either be run until the first hook that does something "interesting", like so:

        AP_IMPLEMENT_HOOK_RUN_FIRST(int, do_something, (request_rec *r, int n), (r, n), DECLINED)

    The first hook that does not return DECLINED stops the loop, and its return value is returned from the hook caller. Note that DECLINED is the traditional Apache hook return meaning "I didn't do anything", but it can be whatever suits you. Alternatively, all hooks can be run until an error occurs. This boils down to permitting two return values, one of which means "I did something, and it was OK", and the other meaning "I did nothing". The first function that returns a value other than one of those two stops the loop, and its return is the return value. Declare these like so:

        AP_IMPLEMENT_HOOK_RUN_ALL(int, do_something, (request_rec *r, int n), (r, n), OK, DECLINED)

    Again, OK and DECLINED are the traditional values; you can use what you want. Call the hook callers: At appropriate moments in the code, call the hook caller, like so:

        int n, ret;
        request_rec *r;

        ret = ap_run_do_something(r, n);

    Hooking the hook: A module that wants a hook to be called needs to do two things. Implement the hook function: include the appropriate header, and define a static function of the correct type:

        static int my_something_doer (request_rec *r, int n)
        {
            return OK;
        }

    Add a hook registering function: during initialisation, Apache will call each module's hook registering function, which is included in the module structure:

        static void…
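
The RUN_FIRST semantics — call the registered functions in order until one returns something other than the "declined" value — can be sketched without Apache's macro machinery. Everything here (run_first, hook_fn, the TOY_* constants, the sample hooks) is invented for illustration; it models the behavior described above, not the actual expansion of AP_IMPLEMENT_HOOK_RUN_FIRST.

```c
#include <stddef.h>

/* Toy RUN_FIRST dispatcher, modeling the semantics described above.
 * Invented names and values, not Apache's macros. */
enum { TOY_DECLINED = -1, TOY_OK = 0 };

typedef int (*hook_fn)(int n);

static int run_first(hook_fn *hooks, size_t count, int n, int decline_value)
{
    for (size_t i = 0; i < count; i++) {
        int rv = hooks[i](n);
        if (rv != decline_value)
            return rv;          /* first "interesting" result stops the loop */
    }
    return decline_value;       /* nobody handled it */
}

/* two sample hooks: the first always declines, the second handles even n */
static int hook_declines(int n) { (void)n; return TOY_DECLINED; }
static int hook_evens(int n) { return (n % 2 == 0) ? TOY_OK : TOY_DECLINED; }
```

Running it over { hook_declines, hook_evens } shows the dispatcher skipping the declining hook and returning the first real result, or the decline value when every hook passes.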

    Original URL path: http://ama09.obspm.fr/manual-2.0/developer/hooks.html (2015-11-16)

  • Converting Modules from Apache 1.3 to Apache 2.0 - Apache HTTP Server
    The messier changes: Register Hooks. The new architecture uses a series of hooks to provide for calling your functions. These you'll need to add to your module by way of a new function, static void register_hooks(void). The function is really reasonably straightforward once you understand what needs to be done. Each function that needs calling at some stage in the processing of a request needs to be registered; handlers do not. There are a number of phases where functions can be added, and for each you can specify, with a high degree of control, the relative order that the function will be called in. This is the code that was added to mod_mmap_static:

        static void register_hooks(void)
        {
            static const char * const aszPre[] = { "http_core.c", NULL };
            ap_hook_post_config(mmap_post_config, NULL, NULL, HOOK_MIDDLE);
            ap_hook_translate_name(mmap_static_xlat, aszPre, NULL, HOOK_LAST);
        }

    This registers 2 functions that need to be called: one in the post_config stage (virtually every module will need this one) and one for the translate_name phase. Note that while there are different function names, the format of each is identical. So what is the format?

        ap_hook_[phase_name](function_name, predecessors, successors, position);

    There are 3 hook positions defined: HOOK_FIRST, HOOK_MIDDLE, HOOK_LAST. To define the position, you use the position and then modify it with the predecessors and successors. Each of the modifiers can be a list of functions that should be called, either before the function is run (predecessors) or after the function has run (successors). In the mod_mmap_static case, I didn't care about the post_config stage, but the mmap_static_xlat must be called after the core module had done its name translation, hence the use of the aszPre to define a modifier to the position HOOK_LAST.

    Module Definition: There are now a lot fewer stages to worry about when creating your module definition. The old definition looked like:

        module MODULE_VAR_EXPORT module_name_module =
        {
            STANDARD_MODULE_STUFF,
            /* initializer */
            /* dir config creater */
            /* dir merger --- default is to override */
            /* server config */
            /* merge server config */
            /* command handlers */
            /* handlers */
            /* filename translation */
            /* check_user_id */
            /* check auth */
            /* check access */
            /* type_checker */
            /* fixups */
            /* logger */
            /* header parser */
            /* child_init */
            /* child_exit */
            /* post read-request */
        };

    The new structure is a great deal simpler:

        module MODULE_VAR_EXPORT module_name_module =
        {
            STANDARD20_MODULE_STUFF,
            /* create per-directory config structures */
            /* merge per-directory config structures */
            /* create per-server config structures */
            /* merge per-server config structures */
            /* command handlers */
            /* handlers */
            /* register hooks */
        };

    Some of these read directly across, some don't. I'll try to summarise what should be done below. The stages that read directly across: dir config creater → create per-directory config structures; server config → create per-server config structures; dir merger → merge per-directory config structures; merge server config → merge per-server config structures; command table → command apr_table_t; handlers → handlers. The remainder of the old functions…
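
The coarse ordering control above — HOOK_FIRST, HOOK_MIDDLE, HOOK_LAST buckets, refined further by predecessor/successor lists — can be modeled with a small sketch that just orders registrations by position. All names here (toy_reg, toy_order_hooks, the TOY_* constants) are invented; a real implementation would also apply the aszPre-style constraints with a stable sort.

```c
#include <stdlib.h>

/* Toy model of positional hook registration. Invented names; this only
 * handles the FIRST/MIDDLE/LAST buckets, not predecessor/successor lists. */
enum { TOY_FIRST = 0, TOY_MIDDLE = 1, TOY_LAST = 2 };

typedef struct { int position; int id; } toy_reg;

static int cmp_position(const void *a, const void *b)
{
    return ((const toy_reg *)a)->position - ((const toy_reg *)b)->position;
}

static void toy_order_hooks(toy_reg *regs, size_t n)
{
    /* qsort is not stable; fine for this sketch where positions differ.
     * A faithful model would use a stable sort plus ordering constraints. */
    qsort(regs, n, sizeof(toy_reg), cmp_position);
}
```

Registering a LAST, then a FIRST, then a MIDDLE hook and ordering them yields FIRST, MIDDLE, LAST — the call order the position argument requests, regardless of registration order.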

    Original URL path: http://ama09.obspm.fr/manual-2.0/developer/modules.html (2015-11-16)


