Published: 2012-06-29
Bamboo Web Framework Design and User Manual

Table of Contents

- Table of Contents
- Introduction
- File Layout
- Command-Line Tools
- Naming Conventions
- The Request Object
- Bamboo Application Directory Structure
- URL Routing
- Handling Requests
- Handling Forms
- Handling Uploads
- Template Rendering
- Defining Models
- Debugging
- Global Helper Functions and Objects
- Libraries
- Referencing Static Assets
- Installation
- Service Configuration and Startup
    - Starting redis-server

Introduction

The Bamboo Web framework is a Web development framework that our company has spent two years building, designed to speed up the development of websites and networked applications. It targets the characteristics of modern Web 2.0 applications, adding many new features on top of traditional Web frameworks. Its main features are:

- Fast: it benefits from the speed of Lua and LuaJIT;

- Highly extensible: functionality can be extended easily;

- Fully asynchronous request handling: it avoids the problems of the synchronous model and yields higher server performance;

- Support for large numbers of concurrent online connections: it brings the capabilities of traditional network programming to the Web page;

- Support for NoSQL databases: data can be organized more flexibly;

- A plugin mechanism: Web features are better isolated and easier to reuse;

- Automatic load balancing: when deployed on a server cluster, load is balanced automatically.


The overall architecture of the system is shown in the following diagram:

[Figure: Bamboo overall architecture]


File Layout
The Bamboo framework consists of many files and directories, arranged in the following tree:

bamboo
├── bin
│   ├── bamboo
│   └── bamboo_handler
├── cmd_tmpls
│   ├── createapp
│   │   ├── app
│   │   │   ├── handler_entry.lua
│   │   │   └── settings.lua
│   │   ├── initial
│   │   ├── plugins
│   │   └── views
│   │       └── index.html
│   ├── createmodel
│   │   └── newmodel.lua
│   ├── createplugin
│   │   ├── init.lua
│   │   └── testplugin.html
│   ├── media
│   │   ├── css
│   │   ├── images
│   │   ├── js
│   │   ├── plugins
│   │   └── uploads
│   └── pluginmedia
│       ├── css
│       ├── images
│       ├── js
│       └── uploads
├── errors.lua
├── form.lua
├── init.lua
├── m2.lua
├── model.lua
├── README
├── redis.lua
├── session.lua
├── testing.lua
├── upload.lua
├── user.lua
├── view.lua
├── node.lua
├── message.lua
├── menu.lua
└── web.lua


The purpose of each file is described below:

File                      Purpose
init.lua                  System entry point; defines the low-level machinery for plugin and model registration
web.lua                   Connection object; receives requests and sends responses
session.lua               Session creation and management
form.lua                  Parsing of submitted data (URL query strings and forms)
model.lua                 Model base and database abstraction layer; talks to the database and defines the basic model operations
view.lua                  Template rendering engine; provides concise yet powerful template rendering
m2.lua                    Interacts with the Mongrel2 web server
errors.lua                Error-handling module; responsible for error reporting
redis.lua                 Connection wrapper around the underlying Redis
testing.lua               Automated testing tool; test tasks can be scripted
node.lua                  Base class for tree structures; any tree-shaped data structure later on can inherit from it
message.lua               Basic message definition; the comment feature described later uses it directly
menu.lua                  Menu class, inheriting from Node; used to generate and manage menus
user.lua                  Basic API for user registration and login
upload.lua                Basic API for saving uploaded files
bin/bamboo                Bamboo's launcher tool
bin/bamboo_handler        Loader for Bamboo applications
cmd_tmpls/createapp       Template directory for the createapp command
cmd_tmpls/createmodel     Template directory for the createmodel command
cmd_tmpls/createplugin    Template directory for the createplugin command
cmd_tmpls/media           Asset directory structure for the createapp command
cmd_tmpls/pluginmedia     Asset directory structure for the createplugin command


Command-Line Tools

createapp
    Full form:

        bamboo createapp appname

    Purpose: creates the basic directory structure and files of a new Bamboo application. The command generates, in the current directory, the directory tree and the necessary files a Bamboo app should have.
    Note: this command can be run from any directory.

createplugin
    Full form:

        bamboo createplugin plugin_name

    Purpose: creates a new Bamboo plugin. The command generates, under plugins in the current directory, the directory tree and the necessary files a Bamboo plugin should have.
    Note: this command should be run inside the plugins directory.

createmodel
    Full form:

        bamboo createmodel model_name

    Purpose: creates a template file for a new Bamboo model in the current directory.
    Notes: the model name must start with a capital letter; this command should be run inside the models directory.

initdb
    Full form:

        bamboo initdb initial_data_file

    Purpose: loads the data in the initial data file into the database in one pass.
    Note: this command can be run from any directory; a reachable file name must be given.

test
    Full form:

        bamboo test td_instance

    Purpose: runs a test instance.
    Note: this command can be run from any directory.

start
    Full form:

        bamboo start

    Purpose: starts the Bamboo application process. This is the command that runs the site.
    Note: this command must be run from the application's root directory.

help
    Full form:

        bamboo help

    Purpose: lists all commands supported by bamboo.
    Note: this command can be run from any directory.

More commands are being added.


Naming Conventions

- File names are all lowercase;
- Class names start with a capital letter, with the rest lowercase;
- Module names are lowercase;
- Function names use camelCase: the first word lowercase, each later word capitalized, with no separator between words;
- Variable names are all lowercase, with words separated by underscores, mainly to distinguish them from function names.
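As an illustration of these conventions (the model and names below are invented for the example):

```lua
-- file: models/blogpost.lua           -- file name: all lowercase
local Model = require 'bamboo.model'   -- module name: lowercase

local BlogPost = Model:extend {        -- class name: capitalized
    __name = 'BlogPost';
}

-- function name: first word lowercase, later words capitalized
function BlogPost.printTitle(self)
    local title_text = self.title or ''   -- variable: lowercase with underscores
    print(title_text)
end

return BlogPost
```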




The Request Object
After a request enters Bamboo, it is passed to the application as an object named req, which exposes the following fields:

req field     Description
conn_id       Unique connection id, e.g. 5
path          Request path, e.g. /arc
session_id    Session id, e.g. APP-f4a619a2f181ccccd4812e9f664e9029
data          Extra data (likely things Mongrel2 adds on top of the raw request)
body          When the request is a POST, the uploaded data is placed here
METHOD        Request method, e.g. GET, POST
sender        Unique id of the Mongrel2 (i.e. client-facing) end of the ZeroMQ channel, e.g. e884a439-31be-4f74-8050-a93565795b25
session       Session-related data
headers       Request headers; they contain more data you can consult as needed
ajax          Present only if the request is an AJAX request
user          (may be present) Set if the user is logged in
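A minimal handler sketch that reads a few of these req fields (the route name is invented; web:page and web:json are described later in this document):

```lua
local function showRequest(web, req)
    print(req.conn_id, req.path, req.METHOD)
    if req.ajax then
        -- AJAX requests get a JSON reply
        web:json{ path = req.path, method = req.METHOD }
    else
        ptable(req.headers)   -- dump one level of the headers table
        web:page('<p>You requested ' .. req.path .. '</p>')
    end
end

URLS = { '/',
    ['showreq/'] = showRequest,
}
```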


The fields inside req.headers are:

req.headers field   Description
connection          Browser connection property, e.g. keep-alive
x-forwarded-for     e.g. 127.0.0.1
host                e.g. localhost:6767
accept-charset      e.g. GB2312,utf-8;q=0.7,*;q=0.7
VERSION             e.g. HTTP/1.1
METHOD              Request method, e.g. GET
referer             e.g. http://localhost:6767/arc
accept              e.g. text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
PATH                Request path, e.g. /arc
cookie              Cookie contents, e.g. session="APP-f4a619a2f181ccccd4812e9f664e9029"
cache-control       e.g. max-age=0
PATTERN             e.g. /arc
accept-language     e.g. zh-cn,zh;q=0.5
user-agent          The user's browser type, e.g. Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.8)
URI                 The requested resource locator, including the query string, e.g. /arc?foo=bar
keep-alive          e.g. 115
accept-encoding     e.g. gzip,deflate
QUERY               The query string, e.g. foo=bar



Bamboo Application Directory Structure
A Bamboo application is laid out as follows:

BAMBOO APP
├── app
│   ├── handler_entry.lua
│   └── settings.lua
├── initial
├── plugins
└── views

This structure is created automatically by the bamboo createapp command. The purpose of each directory and file:

Directory/file           Description
app                      The application's controller code
app/handler_entry.lua    The application's startup entry file
app/settings.lua         The application's configuration
initial                  Initial data files needed by the application go here
plugins                  The application's plugins (each plugin is self-contained together with its assets)
views                    The directory holding the application's view template files

Let us now look at the main structure of handler_entry.lua, using a concrete example:
require 'bamboo'

local http = require 'lglib.http'

local Form = require 'bamboo.form'
local View = require 'bamboo.view'
local User = require 'bamboo.user'
local Upload = require 'bamboo.upload'

local Page = require 'legecms.models.page'

local registerPlugin = bamboo.registerPlugin
local registerModel = bamboo.registerModel

------------------------------------
-- Model registration: very important
registerModel('Page', require 'legecms.models.page')
registerModel('User', require 'bamboo.user')
registerModel('Message', require 'legecms.models.message')
registerModel('Upload', require 'bamboo.upload')
------------------------------------

------------------------------------
-- Register each plugin function
registerPlugin('testplugin', require('plugins.testplugin'))
registerPlugin('articlelist', require('plugins.articlelist'))
registerPlugin('articleview', require('plugins.articleview'))
registerPlugin('comment', require('plugins.comment'))
registerPlugin('menu', require('plugins.menu'))
------------------------------------

local function index(web, req)
    ptable(req.headers)
    web:page(View("index.html"){ })
end

local node = require 'legecms.node'
local nodesNewLeaf = node.newLeaf
local nodesNewCate = node.newCate


-- This must be a global variable
URLS = { '/',
    ['/'] = index,

    ['nodes/newleaf/'] = nodesNewLeaf,
    ['nodes/newcate/'] = nodesNewCate,
}


As this example shows, handler_entry.lua consists of the following parts:

- the dependency imports at the top;
- registration of the models used (if any);
- registration of the plugins used (if any);
- definition of handler functions, or imports of handlers from elsewhere;
- the URLS routing table (the URLS variable must be global).


URL Routing

As the code above shows, the URL routing table has the following shape:
URLS = { '/',
    ['/'] = index,

    ['nodes/newleaf/'] = nodesNewLeaf,
    ['nodes/newcate/'] = nodesNewCate,
}


URLS itself is a Lua table. Beyond following ordinary table syntax, note these rules:

- The first element is a list element of type string, i.e. URLS[1] is a string; in the example above it is '/'. This value is tied to the Mongrel2 configuration; for a young application it is usually just '/'.
- The rest are ordinary key-value pairs: the key is a string giving the URL value or pattern, and the value is the corresponding handler function.
- Keys should be written as follows:
    - no leading '/': if the request URI is /page/123/, write the key as 'page/123/';
    - always a trailing '/': if the request URI is /page/123, the key is still written 'page/123/'. Better yet, make sure the client always sends URLs whose last character is '/'.
- A key may contain a pattern, used to match a whole family of URLs; these are Lua patterns, which are convenient to use.
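For example, one Lua-pattern key can serve a whole family of page URLs; the handler below re-parses req.path with string.match to recover the number (a sketch with invented names):

```lua
local function pageView(web, req)
    -- req.path looks like /page/123/; %d+ is a Lua pattern, not a POSIX regex
    local id = req.path:match('^/page/(%d+)/$')
    web:page('requested page id: ' .. (id or 'unknown'))
end

URLS = { '/',
    ['page/%d+/'] = pageView,   -- matches /page/1/, /page/42/, ...
}
```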



Handling Requests
A URL handler is an ordinary Lua function, but its parameter list is fixed:

- the first parameter must be web, an object representing this connection;
- the second parameter must be req, the request object for this request, which carries all of the request's information.



A URL handler looks like this:
local function index(web, req)
    ptable(req.headers)
    web:page(View("index.html"){ })
end



In fact, to make programming more convenient and concise, Bamboo also exposes web and req as global variables, so code still works even if the two parameters are not passed explicitly.

Two functions cover most responses:

web:page()

    Returns a page; the argument is the page as a string.

web:json()

    Returns a JSON structure; the argument is a Lua table.
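Both response styles side by side, as a sketch (route and template names invented):

```lua
local View = require 'bamboo.view'

local function hello(web, req)
    -- web:page() takes the finished page as a string;
    -- View renders a template to exactly such a string
    web:page(View('index.html'){ title = 'Hello' })
end

local function helloJson(web, req)
    -- web:json() takes a Lua table and sends it as JSON
    web:json{ ok = true, path = req.path }
end

URLS = { '/',
    ['hello/'] = hello,
    ['hello/json/'] = helloJson,
}
```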
Handling Forms
Form handling is implemented in form.lua. Import it with:

local Form = require 'bamboo.form'

Commonly used API:

Form:parse(req)

    local params = Form:parse(req)

    Parses the request parameters into a Lua table, stored in params. This function handles:

    - for GET requests, the parameters carried in the URL's query string;
    - for POST requests, parameters encoded as URL-encoded data (an ordinary form submission);
    - for POST requests, HTML4 file uploads.

Form:parseQuery(req)

    local params = Form:parseQuery(req)

    Parses the parameters in the request's URL query string into a Lua table, stored in params.

    This function exists to cover a gap in Form:parse(): when data is submitted with the POST method, parameters attached to the URL query string cannot be parsed. In that case, call Form:parseQuery() explicitly to retrieve them.

Tip: sometimes the submitted form data needs no extra processing and can be passed straight into a View for rendering, saving a lot of intermediate code.
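Putting the pieces together, a handler might parse a submitted form and hand the fields straight to a template (template and field names are invented for the example):

```lua
local Form = require 'bamboo.form'
local View = require 'bamboo.view'

local function submitComment(web, req)
    local params = Form:parse(req)   -- GET query or POSTed form fields
    web:page(View('comment_preview.html'){
        author  = params.author,
        content = params.content,
    })
end

URLS = { '/',
    ['comment/submit/'] = submitComment,
}
```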

Handling Uploads
Uploading is implemented in upload.lua. Import it with:

local Upload = require 'bamboo.upload'

The upload feature depends on the Form class.

Both HTML4 and HTML5 uploads, single-file or multi-file, can be handled with Upload:process(web, req); there is no need to call Form:parse(req) explicitly (for HTML5 uploads, Form:parse() would actually fail).

Typical usage:

local file, return_type = Upload:process(web, req)

By default, uploaded files are stored under the media/uploads/ directory.

Return values:

- file is the resulting Upload instance, or a list of instances;
- return_type is either 'single' or 'multiple', indicating whether one file or several were processed.



The full parameter list is:

function process(self, web, req, dest_dir, prefix, postfix)

Parameters:

- dest_dir: target directory relative to media/uploads/, letting you create your own subdirectories;

- prefix: prefix to use when saving the file;

- postfix: suffix to use when saving the file.


After this function runs, the file is saved on disk, and a corresponding Upload model instance (containing a valid retrieval path) is saved to the database.
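A sketch using the full parameter form; the subdirectory and prefix here are arbitrary example values:

```lua
local Upload = require 'bamboo.upload'

local function uploadAvatar(web, req)
    -- files end up under media/uploads/avatars/ with an 'avatar_' prefix
    local file, return_type = Upload:process(web, req, 'avatars/', 'avatar_')
    if return_type == 'single' then
        web:json{ saved = 1 }
    else
        web:json{ saved = #file }   -- 'multiple': file is a list of instances
    end
end

URLS = { '/',
    ['upload/avatar/'] = uploadAvatar,
}
```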

Template Rendering
Bamboo has a powerful, concise, and efficient template rendering engine.

All page templates are built from combinations of the following seven template tags, which are essentially complete:

- {{  }}    reference a Lua variable;
- {%  %}    embed a Lua statement;
- {(  )}    include another page (fragment) in the template;
- {<  >}    HTML-escape the enclosed content (for safety);
- {:  :}    inherit from a base template;
- {[  ]}    mark a block to be replaced during template inheritance;
- {^  ^}    invoke a plugin, passing it parameters.



Lua statements and functions can be embedded in templates easily, which gives them full logic and computation capabilities; unlike Django, there is no separate template language imposing its many restrictions on the application.

Template rendering is implemented in view.lua. Import it with:

local View = require 'bamboo.view'

Syntax:

    View('template_file_name'){ parameter_list }

Notes:

- The template file name may contain a path, with components separated by '/'.
- The parameter list is the list of variables passed into the template for rendering, written with Lua table syntax, e.g. View('index.html'){ page = page, news = news, egg = egg }.
- Given a template file name, Bamboo looks for the file in the following places in order (raising an error if it is not found):
    - app_dir/USERDEFINED_VIEWS/
    - app_dir/views/
    - app_dir/plugins/

  USERDEFINED_VIEWS is a variable configurable in settings.lua, defaulting to ./views/.

- The template system is implemented strictly: referencing a variable in a template that was not passed in raises an error.
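A small sketch of how the first few tags compose (the file name, variables, and the exact statement-splitting inside {% %} are assumptions for illustration):

```html
<!-- views/news_list.html (hypothetical) -->
<h1>{{ title }}</h1>
<ul>
{% for _, item in ipairs(news) do %}
    <li>{< item.text >}</li>   <!-- HTML-escaped for safety -->
{% end %}
</ul>
```

Rendered from a handler with:

```lua
web:page(View('news_list.html'){
    title = 'Latest News',
    news  = { { text = 'first' }, { text = 'second & third' } },
})
```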




Defining Models
Users can define their own models; models are defined through inheritance.

The basic form of a model definition is as follows (replace $MODEL with the concrete model name):
module(..., package.seeall)

local Model = require 'bamboo.model'

local $MODEL = Model:extend {
    __tag = 'Bamboo.Model.$MODEL';
    __name = '$MODEL';
    __desc = 'Generic $MODEL definition';
    __fields = {
        ['name'] = { freshfield = true },

        ......
    };

    init = function (self, t)
        if not t then return self end
        self.name = t.name or self.name
        ......
        return self
    end;

}

return $MODEL


Note that a model may inherit from any other model, not necessarily from Model itself.


Creating a model instance:

    instance = model_object(t)

Saving a model instance:

    instance:save()

Note: save() stores only the non-foreign fields (i.e. fields whose definition has no foreign attribute).
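A concrete sketch following the $MODEL skeleton above (the model and its fields are invented):

```lua
-- models/book.lua
module(..., package.seeall)

local Model = require 'bamboo.model'

local Book = Model:extend {
    __tag = 'Bamboo.Model.Book';
    __name = 'Book';
    __desc = 'Generic Book definition';
    __fields = {
        ['title']  = { freshfield = true },
        ['author'] = { freshfield = true },
    };

    init = function (self, t)
        if not t then return self end
        self.title  = t.title or self.title
        self.author = t.author or self.author
        return self
    end;
}

return Book
```

Creating and saving an instance then looks like:

```lua
local Book = require 'models.book'
local b = Book { title = 'Dune', author = 'Frank Herbert' }
b:save()   -- persists the non-foreign fields
```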

Attributes available in a field description table:

Attribute key    Meaning
freshfield       true. Marks the field as newly defined in this model (as opposed to inherited from the parent model)
foreign          A model name, or UNFIXED, or ANYSTRING. Marks the field as a foreign key and names the model it links to; 'UNFIXED' means the linked model may vary; 'ANYSTRING' means arbitrary strings may be stored as entries in the foreign key
st               'ONE', 'MANY', or 'FIFO'. Used together with foreign; declares whether the link is a single or a multiple foreign key
required=true    Marks the field as required; used in model validation of submitted forms (Model:validate())
......           (more are being added)


A Complete Approach to Model Foreign Keys

Foreign keys come in three kinds: one-to-one, one-to-many, and many-to-many, each with a forward and a reverse direction.

- One-to-one: both directions point to a single object.
- One-to-many: the forward direction points to many objects, the reverse to one.
- Many-to-many: both directions point to many objects.



In Bamboo's model implementation, these cases are expressed with attributes in the field description table:

- one-to-one forward:   foreign='model B', st='ONE'
- one-to-one reverse:   foreign='model A', st='ONE'
- one-to-many forward:  foreign='model B', st='MANY'
- one-to-many reverse:  foreign='model A', st='ONE'
- many-to-many forward: foreign='model B', st='MANY'
- many-to-many reverse: foreign='model A', st='MANY'



Declaring them in the field description table is not enough; methods are needed to operate on them, and Bamboo provides a complete API:

- instance:addForeign(field_name, new_obj)   Adds a link to a new object in one of this object's foreign key fields (what is actually stored is the new object's id). Note that new_obj must be obtained first.
- instance:getForeign(field_name [, start [, stop]])   Retrieves this object's foreign object(s): for a single foreign key, the foreign object itself; for a multiple foreign key, a list of the foreign objects. The start and stop parameters select a slice of a multiple foreign key.
- instance:delForeign(field_name, fr_obj)   Deletes one foreign key link. Note that the foreign object to delete must be obtained first.
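A sketch of a one-to-many link, forward side only (the models and field names are invented):

```lua
-- In the Book model's __fields, a multiple foreign key to Comment:
--     ['comments'] = { foreign = 'Comment', st = 'MANY' },

local book = Book { title = 'Dune' }
book:save()

local c = Comment { content = 'Great read.' }
c:save()                                      -- obtain the object first

book:addForeign('comments', c)                -- stores c's id under 'comments'
local comments = book:getForeign('comments')  -- list of Comment instances
book:delForeign('comments', c)                -- remove the link again
```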



When defining foreign keys, a complete relationship really involves both sides, i.e. both the forward and the reverse direction should be declared. But declaring both directions adds extra fields, so in practice only half the relationship is often used: sometimes the forward half, sometimes the reverse half. Because such half-relationships exist, when an object of one model is deleted, the object of another model that references it through a foreign key may have no way of knowing. Fully automated deletion management is therefore very difficult; it has always been one of the most painful problems in database systems.

Given this difficulty, Bamboo leaves foreign key deletion to the user, who must handle carefully, according to the business logic, what happens to the foreign keys of other referencing objects once an object is deleted. Bamboo does do one small thing to ease the burden: when getForeign() finds that a foreign object it is asked to fetch no longer exists, it removes that foreign key record itself. This is still limited: the cleanup happens only when getForeign() is called. Fully automatic foreign key cleanup, in the style of a garbage collector, is currently not possible.

Debugging
Debugging currently relies mainly on the reported error messages, together with the print() and ptable() functions.


Global Helper Functions and Objects

Name                 Description
Object               The root object of Bamboo's OOP; all classes inherit from it
I_AM_CLASS(self)     Class-method declaration; put it at the start of a class function to prevent instances from calling the class method
I_AM_INSTANCE(self)  Instance-method declaration; put it at the start of an instance function to prevent classes from calling the instance method
T(tbl)               Creates a table that can be used like tbl:insert('x')
toString(obj)        Converts an object to a string
ptable(tbl)          Prints one level of a table
po(obj)              Prints an object
checkType(...)       Checks parameter types
checkRange(...)      Checks parameter ranges
isFalse(t)           Tests whether the argument is falsy (the number 0, the empty string, false, nil, and the empty table {} are all treated as false)
isEmpty(obj)         Tests whether an object is empty (in Bamboo's OOP, an object table is non-empty as soon as it contains a non-false field other than _parent, the new function, and the extend function)
setProto(...)        Sets the prototype of a Lua table
seri(obj)            Serializes an object (any Lua type -> string)
unseri(str)          Deserializes a string (string -> some Lua type)
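A few of these helpers in use (a sketch; the behavior in the comments restates the descriptions above):

```lua
local t = T{}                -- a table usable as t:insert('x')
t:insert('x')
t:insert('y')
ptable(t)                    -- prints one level of the table

print(isFalse(''))           -- the empty string counts as false here
print(isFalse({}))           -- so does the empty table

local s    = seri { a = 1, b = 'two' }   -- any Lua value -> string
local back = unseri(s)                   -- string -> Lua value
print(back.a, back.b)
```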






Referencing Static Assets
All static asset paths (CSS, JS, images, etc.) should start with '/media/'. For example:
/media/css/blueprint.css
/media/js/jquery-1.5.min.js
/media/images/logo.png
/media/plugins/menu/css/menustyle.css



Installation

Assuming the environment is Ubuntu 10.04, run the following steps in order.

- Install gcc and the basic build environment:

    apt-get install build-essential

- Install the Lua interpreter, headers, base libraries, and luarocks:

    apt-get install lua5.1 liblua5.1-0 liblua5.1-0-dev luarocks

- Install uuid-dev and friends, needed by ZeroMQ and the programs that follow:

    apt-get install uuid-dev sqlite3 libsqlite3-dev git-core

- Download the latest stable release from the ZeroMQ site (http://www.zeromq.org/), e.g. zeromq-2.0.10.tar.gz:

    tar xvf zeromq-2.0.10.tar.gz

    cd zeromq-2.0.10

    ./configure    # if this reports missing dependencies, install them with apt-get

    make

    make install   # installs the zmq libraries under /usr/local/lib/

- Run ldconfig to refresh the system's library cache.


- Download the latest stable release from the Mongrel2 site (http://mongrel2.org/home), e.g. mongrel2-1.5.tar.bz2:

    tar xvf mongrel2-1.5.tar.bz2

    cd mongrel2-1.5

    make

    make install

- Download the latest stable Redis release, e.g. redis-2.2.2.tar.gz:

    tar xvf redis-2.2.2.tar.gz

    cd redis-2.2.2

    make

    make install

This completes the parts installed via apt and from source.

Next comes the installation of the rocks packages. Download the attached package BUILD-110322.tar.gz, unpack it, enter the directory, and run

    ./bamboo-install.sh

If it completes without problems, the installation has succeeded.
Service Configuration and Startup
>> Starting redis-server

Before starting, it is best to adjust the configuration parameters; see the redis.conf contents below. Copy the file (if you do not have one, copy it from here and save it as redis.conf) into /etc, then run redis-server /etc/redis.conf to start the database process. The process starts as a daemon and does not occupy a terminal. It can also be configured to start at boot, but that setup is outside the scope of this document.

Here are the contents of a tested redis.conf:
daemonize yes
pidfile /var/run/redis.pid
port 6379
timeout 0
loglevel verbose
logfile stdout
databases 256
save 900 1
save 300 10
save 60 10000
rdbcompression yes
dbfilename dump.rdb
dir /var/local/
appendonly no
appendfsync everysec
vm-enabled no
vm-swap-file /tmp/redis.swap
vm-max-memory 0
vm-page-size 32
vm-pages 134217728
vm-max-threads 4
hash-max-zipmap-entries 64
hash-max-zipmap-value 512
activerehashing yes


And a complete, annotated version:
# Redis configuration file example

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes

# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
# default. You can specify a custom pid file location here.
pidfile /var/run/redis.pid

# Accept connections on the specified port, default is 6379
port 6379

# If you want you can bind a single interface, if the bind option is not
# specified all the interfaces will listen for incoming connections.
#
# bind 127.0.0.1

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0

# Set server verbosity to 'debug'
# it can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel verbose

# Specify the log file name. Also 'stdout' can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile stdout

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 256

################################ SNAPSHOTTING  #################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving at all commenting all the "save" lines.

save 900 1
save 300 10
save 60 10000

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# The filename where to dump the DB
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# Also the Append Only File will be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /var/local/

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. Note that the configuration is local to the slave
# so for example it is possible to configure the slave to save the DB with a
# different interval, or to listen to another port, and so on.
#
# slaveof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default there
# is no limit, and it's up to the number of file descriptors the Redis process
# is able to open. The special value '0' means no limits.
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 128

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys with an
# EXPIRE set. It will try to start freeing keys that are going to expire
# in little time and preserve keys with a longer time to live.
# Redis will also try to remove objects from free lists if possible.
#
# If all this fails, Redis will start to reply with errors to commands
# that will use more memory, like SET, LPUSH, and so on, and will continue
# to reply to most read-only commands like GET.
#
# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
# 'state' server or cache, not as a real DB. When Redis is used as a real
# database the memory usage will grow over the weeks, it will be obvious if
# it is going to use too much memory in the long run, and you'll have the time
# to upgrade. With maxmemory after the limit is reached you'll start to get
# errors for write operations, and this may even lead to DB inconsistency.
#
# maxmemory <bytes>

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. If you can live
# with the idea that the latest records will be lost if something like a crash
# happens this is the preferred way to run Redis. If instead you care a lot
# about your data and don't want to that a single record can get lost you should
# enable the append only mode: when this mode is enabled Redis will append
# every write operation received in the file appendonly.aof. This file will
# be read on startup in order to rebuild the full dataset in memory.
#
# Note that you can have both the async dumps and the append only file if you
# like (you have to comment the "save" statements above to disable the dumps).
# Still if append only mode is enabled Redis will load the data from the
# log file at startup ignoring the dump.rdb file.
#
# IMPORTANT: Check the BGREWRITEAOF to check how to rewrite the append
# log file in background when it gets too big.

appendonly no

# The name of the append only file (default: "appendonly.aof")
# appendfilename appendonly.aof

# The fsync() call tells the Operating System to actually write data on disk
# instead to wait for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log . Slow, Safest.
# everysec: fsync only if one second passed since the last fsync. Compromise.
#
# The default is "everysec" that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

################################ VIRTUAL MEMORY ###############################

# Virtual Memory allows Redis to work with datasets bigger than the actual
# amount of RAM needed to hold the whole dataset in memory.
# In order to do so very used keys are taken in memory while the other keys
# are swapped into a swap file, similarly to what operating systems do
# with memory pages.
#
# To enable VM just set 'vm-enabled' to yes, and set the following three
# VM parameters accordingly to your needs.

vm-enabled no
# vm-enabled yes

# This is the path of the Redis swap file. As you can guess, swap files
# can't be shared by different Redis instances, so make sure to use a swap
# file for every redis process you are running. Redis will complain if the
# swap file is already in use.
#
# The best kind of storage for the Redis swap file (that's accessed at random)
# is a Solid State Disk (SSD).
#
# *** WARNING *** if you are using a shared hosting the default of putting
# the swap file under /tmp is not secure. Create a dir with access granted
# only to Redis user and configure Redis to create the swap file there.
vm-swap-file /tmp/redis.swap

# vm-max-memory configures the VM to use at max the specified amount of
# RAM. Everything that does not fit will be swapped on disk *if* possible, that
# is, if there is still enough contiguous space in the swap file.
#
# With vm-max-memory 0 the system will swap everything it can. Not a good
# default, just specify the max amount of RAM you can in bytes, but it's
# better to leave some margin. For instance specify an amount of RAM
# that's more or less between 60 and 80% of your free RAM.
vm-max-memory 0

# Redis swap files is split into pages. An object can be saved using multiple
# contiguous pages, but pages can't be shared between different objects.
# So if your page is too big, small objects swapped out on disk will waste
# a lot of space. If you page is too small, there is less space in the swap
# file (assuming you configured the same number of total swap file pages).
#
# If you use a lot of small objects, use a page size of 64 or 32 bytes.
# If you use a lot of big objects, use a bigger page size.
# If unsure, use the default :)
vm-page-size 32

# Number of total memory pages in the swap file.
# Given that the page table (a bitmap of free/used pages) is taken in memory,
# every 8 pages on disk will consume 1 byte of RAM.
#
# The total swap size is vm-page-size * vm-pages
#
# With the default of 32-bytes memory pages and 134217728 pages Redis will
# use a 4 GB swap file, that will use 16 MB of RAM for the page table.
#
# It's better to use the smallest acceptable value for your application,
# but the default is large in order to work in most conditions.
vm-pages 134217728

# Max number of VM I/O threads running at the same time.
# This threads are used to read/write data from/to swap file, since they
# also encode and decode objects from disk to memory or the reverse, a bigger
# number of threads can help with big objects even if they can't help with
# I/O itself as the physical device may not be able to couple with many
# reads/writes operations at the same time.
#
# The special value of 0 turn off threaded I/O and enables the blocking
# Virtual Memory implementation.
vm-max-threads 4

############################### ADVANCED CONFIG ###############################

# Glue small output buffers together in order to send small replies in a
# single TCP packet. Uses a bit more CPU but most of the times it is a win
# in terms of number of queries per second. Use 'yes' if unsure.
glueoutputbuf yes

# Hashes are encoded in a special way (much more memory efficient) when they
# have at max a given number of elements, and the biggest element does not
# exceed a given threshold. You can configure this limits with the following
# configuration directives.
hash-max-zipmap-entries 64
hash-max-zipmap-value 512

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into an hash table
# that is rhashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# active rehashing the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply form time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes

################################## INCLUDES ###################################

# Include one or more other config files here. This is useful if you
# have a standard template that goes to all redis server but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# include /path/to/local.conf
# include /path/to/other.conf



>> Starting the Mongrel2 web server




- Download the attached Mongrel2 configuration package monserver-example.tar.gz, unpack it, and run cmds/projectconfig.sh and cmds/startserver.sh; the server is now started.
- Download the attached handler-example.tar.gz, unpack it, enter the directory, and run bamboo start; the example application is now started.
- Open http://localhost:6767/ in a browser to see the example page.

This document is still rough; more content and corrections will follow. Please try the installation yourself, and report anything missing or incorrect promptly.