The server maintains multiple thread pools to handle requests. There
are three thread pools to start with, and all of them are pre-allocated
before the server starts working.
Threads from the worker pool are used to deliver the actual dynamic
content. Each thread uses a separate instance of the application object
to serve its request.
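As a rough sketch of this arrangement, each worker thread below owns a
private application instance and pulls requests from a shared queue. The
sketch is in Java, and the names WorkerPool, Application and handle() are
assumptions chosen for illustration, not the actual OAS interfaces.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical application interface; the real OAS classes are not shown here.
    interface Application {
        byte[] handle(String request);
    }

    class WorkerPool {
        private final BlockingQueue<String> requests = new LinkedBlockingQueue<>();

        WorkerPool(int size, java.util.function.Supplier<Application> factory) {
            for (int i = 0; i < size; i++) {
                // One private Application instance per worker thread, so no
                // locking is needed around per-application state.
                Application app = factory.get();
                Thread worker = new Thread(() -> {
                    while (true) {
                        try {
                            String req = requests.take();
                            byte[] response = app.handle(req);
                            // A prepared buffer like this would then be handed
                            // to the IO pool (sketched further below).
                        } catch (InterruptedException e) {
                            return; // pool is shutting down
                        }
                    }
                });
                worker.setDaemon(true);
                worker.start();
            }
        }

        void submit(String request) {
            requests.add(request);
        }
    }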
In some tests it has been found that this server architecture can deliver
static content with high throughput even under extreme load. The
throughput is almost the same as what Apache can deliver for static content.
However, Apache tends to saturate and/or run out of open file descriptors
under extreme load, whereas the multithreaded non-blocking model adopted
by OAS allows it to serve requests with reasonable latency and sustained
throughput even under extreme load.
This is possible because the request handler loads the static content
into memory only once and all the threads send back the same cached
copy. This does not reflect later changes to the file on disk, but it
is still useful.
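A minimal sketch of this load-once behaviour, assuming a hypothetical
StaticCache class; the real OAS request handler is not shown here.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    // Hypothetical sketch of the "load once, serve the cached copy" idea.
    class StaticCache {
        private final ConcurrentMap<String, byte[]> cache = new ConcurrentHashMap<>();

        byte[] get(String path) {
            // The file is read from disk only on the first request; every
            // later request, from any thread, gets the same in-memory copy,
            // so changes on disk are not picked up until a restart.
            return cache.computeIfAbsent(path, p -> {
                try {
                    return Files.readAllBytes(Paths.get(p));
                } catch (IOException e) {
                    return new byte[0]; // real code would return an error page
                }
            });
        }
    }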
In order to serve requests faster, the server maintains an IO thread pool.
Threads in this pool are used to push content to clients. A worker
thread can deposit a prepared buffer with the IO pool and move on to the
next request.
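The hand-off could look roughly like the sketch below, which uses blocking
writes on a few dedicated IO threads purely to show how a worker deposits a
buffer and returns immediately; IoPool and deposit() are hypothetical names,
and the actual OAS IO pool is built on its non-blocking model rather than
blocking writes.

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical IO pool: worker threads deposit a prepared buffer here
    // and immediately go back to handling the next request.
    class IoPool {
        private final ExecutorService writers;

        IoPool(int size) {
            this.writers = Executors.newFixedThreadPool(size);
        }

        void deposit(Socket client, byte[] buffer) {
            writers.submit(() -> {
                // Closing the stream also closes the connection in this
                // simplified one-response-per-connection sketch.
                try (OutputStream out = client.getOutputStream()) {
                    out.write(buffer); // push the content to the client
                    out.flush();
                } catch (IOException e) {
                    // dropped connection; the worker has nothing to wait on
                }
            });
        }
    }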
The worker thread pool is more than ten times bigger than the other
two. In the default configuration, there are 40 worker threads and
2 threads in each of the other two pools.
A thread pool is a separate abstract entity, and even a request handler
can instantiate its own thread pool if required. With the ability to
resize a thread pool dynamically, it can act as a self-tuning resource
control.
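As an illustration of such a dynamically resizable pool, the sketch below
wraps Java's ThreadPoolExecutor, whose core and maximum sizes can be changed
while it is running; ResizablePool and resize() are hypothetical names, not
the OAS API.

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // Hypothetical resizable pool wrapper; the real OAS thread pool
    // abstraction is not shown here.
    class ResizablePool {
        private final ThreadPoolExecutor executor;

        ResizablePool(int initialSize) {
            this.executor = new ThreadPoolExecutor(
                    initialSize, initialSize,
                    60, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<>());
        }

        // Grow or shrink the pool while it is running. The order of the two
        // calls matters: the maximum size must never drop below the core size.
        void resize(int newSize) {
            if (newSize > executor.getMaximumPoolSize()) {
                executor.setMaximumPoolSize(newSize);
                executor.setCorePoolSize(newSize);
            } else {
                executor.setCorePoolSize(newSize);
                executor.setMaximumPoolSize(newSize);
            }
        }

        void submit(Runnable task) {
            executor.submit(task);
        }
    }

A monitoring thread could call resize() based on queue length or observed
latency, which is the sense in which the pool becomes a self-tuning
resource control.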