Dubbo from Beginner to Expert, Study Notes (8): Installing and Using ActiveMQ (Single Node), Installing and Using Redis (Single Node), Installing and Using the FastDFS Distributed File System (Single Node)

Installing and Using ActiveMQ (Single Node)

Installation (single node)

IP: 192.168.4.101; Environment: CentOS 6.6, JDK 7
1. Install the JDK and configure the environment variables (omitted).

JAVA_HOME=/usr/local/java/jdk1.7.0_72

2. Download the Linux version of ActiveMQ (the latest release at the time of writing is apache-activemq-5.11.1-bin.tar.gz).

$ wget http://apache.fayea.com/activemq/5.11.1/apache-activemq-5.11.1-bin.tar.gz

3. Extract and install:

$ tar -zxvf apache-activemq-5.11.1-bin.tar.gz
$ mv apache-activemq-5.11.1 activemq-01
If the activemq startup script is not executable, grant execute permission (this step is optional):
$ cd /home/wusc/activemq-01/bin/
$ chmod 755 ./activemq

4. Open the required ports in the firewall.
ActiveMQ uses two ports:
one for message traffic (61616 by default),
and one for the web management console (8161 by default), which can be changed in conf/jetty.xml as shown below:

<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
          <!-- the default port number for the web console -->
          <property name="host" value="0.0.0.0"/>
          <property name="port" value="8161"/>
</bean>
# vi /etc/sysconfig/iptables

Add:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 61616 -j ACCEPT 
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8161 -j ACCEPT

Restart the firewall:

# service iptables restart

5. Start ActiveMQ:

$ cd /home/wusc/activemq-01/bin 
$ ./activemq start

6. Open the management console: http://192.168.4.101:8161
The default username and password are admin/admin.
After logging in you reach the console home page.
7. Security configuration (message security)
If no security mechanism is configured, anyone who knows the address of the message service (IP, port, and destination name [queue or topic]) can send and receive messages at will. See the ActiveMQ security documentation:
http://activemq.apache.org/security.html
ActiveMQ supports several message-security strategies; here we take simple authentication as an example.
Add the following at the end of the broker element in conf/activemq.xml:

This is the username and password that Java programs will use when connecting to the broker.

$ vi /home/wusc/activemq-01/conf/activemq.xml
<plugins>
     <simpleAuthenticationPlugin>
     <users>
         <authenticationUser username="wusc" password="wusc.123" groups="users,admins"/>
     </users>
     </simpleAuthenticationPlugin>
</plugins>

This defines a user wusc with password wusc.123 and the roles users and admins.
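As a quick sanity check, the credentials defined above can be used from a plain JMS client. Below is a minimal sketch using the standard ActiveMQ/JMS API (the class name AuthCheck is just an example; the broker address matches this installation):

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class AuthCheck {
	public static void main(String[] args) throws Exception {
		// username, password and broker URL exactly as configured above
		ActiveMQConnectionFactory factory =
				new ActiveMQConnectionFactory("wusc", "wusc.123", "tcp://192.168.4.101:61616");
		Connection connection = factory.createConnection();
		connection.start(); // fails with a JMSSecurityException if the credentials are rejected
		System.out.println("Connected to the broker successfully");
		connection.close();
	}
}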
Set the username and password for the web console admin user:

$ vi /home/wusc/activemq-01/conf/jetty.xml
<bean id="securityConstraint" class="org.eclipse.jetty.util.security.Constraint">
    <property name="name" value="BASIC" />
    <property name="roles" value="admin" />
    <property name="authenticate" value="true" />
</bean>

Make sure authenticate is set to true (the default); this enables authentication for the web console.
The console login usernames and passwords are stored in conf/jetty-realm.properties, for example:

$ vi /home/wusc/activemq-01/conf/jetty-realm.properties
# Defines users that can access the web (console, demo, etc.)
# username: password [,rolename ...]
admin: wusc.123, admin

Note: the format of each entry is
username: password [,rolename ...]
Restart ActiveMQ:

$ /home/wusc/activemq-01/bin/activemq restart

Configure ActiveMQ to start at boot:

# vi /etc/rc.local

Append the following:

## ActiveMQ
su - wusc -c '/home/wusc/activemq-01/bin/activemq start'

8. Demo: an MQ message producer and consumer

Usage

A small demo that sends e-mail asynchronously through the queue.

Project layout

edu-common-parent

pom.xml

<!-- 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 -->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>wusc.edu.common</groupId>
	<artifactId>edu-common-parent</artifactId>
	<version>1.0-SNAPSHOT</version>
	<packaging>pom</packaging>

	<name>edu-common-parent</name>
	<url>http://maven.apache.org</url>

	<distributionManagement>
		<repository>
			<id>nexus-releases</id>
			<name>Nexus Release Repository</name>
			<url>http://192.168.4.221:8081/nexus/content/repositories/releases/</url>
		</repository>
		<snapshotRepository>
			<id>nexus-snapshots</id>
			<name>Nexus Snapshot Repository</name>
			<url>http://192.168.4.221:8081/nexus/content/repositories/snapshots/</url>
		</snapshotRepository>
	</distributionManagement>

	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
		
		<!-- common projects -->
		<edu-common.version>1.0-SNAPSHOT</edu-common.version>
		<edu-common-config.version>1.0-SNAPSHOT</edu-common-config.version>
		<edu-common-core.version>1.0-SNAPSHOT</edu-common-core.version>
		<edu-common-web.version>1.0-SNAPSHOT</edu-common-web.version>

		<edu-demo.version>1.0-SNAPSHOT</edu-demo.version>
		
		<!-- facade projects -->
		<!-- User service interface -->
		<edu-facade-user.version>1.0-SNAPSHOT</edu-facade-user.version>
		<!-- Account service interface -->
		<edu-facade-account.version>1.0-SNAPSHOT</edu-facade-account.version>
		<!-- Order service interface -->
		<edu-facade-order.version>1.0-SNAPSHOT</edu-facade-order.version>
		<!-- Operations service interface -->
		<edu-facade-operation.version>1.0-SNAPSHOT</edu-facade-operation.version>
		<!-- Message queue service interface -->
		<edu-facade-queue.version>1.0-SNAPSHOT</edu-facade-queue.version>
		
		<!-- service projects -->
		<!-- User service -->
		<edu-service-user.version>1.0-SNAPSHOT</edu-service-user.version>
		<!-- Account service -->
		<edu-service-account.version>1.0-SNAPSHOT</edu-service-account.version>
		<!-- Order service -->
		<edu-service-order.version>1.0-SNAPSHOT</edu-service-order.version>
		<!-- Operations service -->
		<edu-service-operation.version>1.0-SNAPSHOT</edu-service-operation.version>
		<!-- Message queue service -->
		<edu-service-queue.version>1.0-SNAPSHOT</edu-service-queue.version>
		
		<!-- web projects -->
		<!-- Operations console -->
		<edu-web-operation.version>1.0-SNAPSHOT</edu-web-operation.version>
		<!-- Portal -->
		<edu-web-portal.version>1.0-SNAPSHOT</edu-web-portal.version>
		<!-- Gateway -->
		<edu-web-gateway.version>1.0-SNAPSHOT</edu-web-gateway.version>
		<!-- Mock shop -->
		<edu-web-shop.version>1.0-SNAPSHOT</edu-web-shop.version>
		<edu-web-shop.version>1.0-SNAPSHOT</edu-web-shop.version>
		
		<!-- app projects -->
		
		<!-- timer projects -->

		<!-- frameworks -->
		<org.springframework.version>3.2.4.RELEASE</org.springframework.version>
		<org.apache.struts.version>2.3.15.1</org.apache.struts.version>

	</properties>

	<dependencies>
		<!-- Test Dependency Begin -->
		<dependency>
			<groupId>junit</groupId>
			<artifactId>junit</artifactId>
			<version>4.11</version>
		</dependency>
		<!-- Test Dependency End -->
	</dependencies>

	<dependencyManagement>
		<dependencies>
			<!-- Common Dependency Begin -->
			<dependency>
				<groupId>xalan</groupId>
				<artifactId>xalan</artifactId>
				<version>2.7.1</version>
			</dependency>
			<dependency>
				<groupId>antlr</groupId>
				<artifactId>antlr</artifactId>
				<version>2.7.6</version>
			</dependency>
			<dependency>
				<groupId>aopalliance</groupId>
				<artifactId>aopalliance</artifactId>
				<version>1.0</version>
			</dependency>
			<dependency>
				<groupId>org.aspectj</groupId>
				<artifactId>aspectjweaver</artifactId>
				<version>1.7.3</version>
			</dependency>
			<dependency>
				<groupId>cglib</groupId>
				<artifactId>cglib</artifactId>
				<version>2.2.2</version>
			</dependency>
			<dependency>
				<groupId>asm</groupId>
				<artifactId>asm</artifactId>
				<version>3.3.1</version>
			</dependency>
			<dependency>
				<groupId>net.sf.json-lib</groupId>
				<artifactId>json-lib</artifactId>
				<version>2.3</version>
				<classifier>jdk15</classifier>
				<scope>compile</scope>
			</dependency>
			<dependency>
				<groupId>org.codehaus.jackson</groupId>
				<artifactId>jackson-core-asl</artifactId>
				<version>1.9.13</version>
			</dependency>
			<dependency>
				<groupId>org.codehaus.jackson</groupId>
				<artifactId>jackson-mapper-asl</artifactId>
				<version>1.9.13</version>
			</dependency>
			<dependency>
				<groupId>ognl</groupId>
				<artifactId>ognl</artifactId>
				<version>3.0.6</version>
			</dependency>
			<dependency>
				<groupId>oro</groupId>
				<artifactId>oro</artifactId>
				<version>2.0.8</version>
			</dependency>
			<dependency>
				<groupId>commons-net</groupId>
				<artifactId>commons-net</artifactId>
				<version>3.2</version>
			</dependency>
			<dependency>
				<groupId>commons-beanutils</groupId>
				<artifactId>commons-beanutils</artifactId>
				<version>1.8.0</version>
			</dependency>
			<dependency>
				<groupId>commons-codec</groupId>
				<artifactId>commons-codec</artifactId>
				<version>1.8</version>
			</dependency>
			<dependency>
				<groupId>commons-collections</groupId>
				<artifactId>commons-collections</artifactId>
				<version>3.2</version>
			</dependency>
			<dependency>
				<groupId>commons-digester</groupId>
				<artifactId>commons-digester</artifactId>
				<version>2.0</version>
			</dependency>
			<dependency>
				<groupId>commons-fileupload</groupId>
				<artifactId>commons-fileupload</artifactId>
				<version>1.3.1</version>
			</dependency>
			<dependency>
				<groupId>commons-io</groupId>
				<artifactId>commons-io</artifactId>
				<version>2.0.1</version>
			</dependency>
			<dependency>
				<groupId>org.apache.commons</groupId>
				<artifactId>commons-lang3</artifactId>
				<version>3.1</version>
			</dependency>
			<dependency>
				<groupId>commons-logging</groupId>
				<artifactId>commons-logging</artifactId>
				<version>1.1.3</version>
			</dependency>
			<dependency>
				<groupId>commons-validator</groupId>
				<artifactId>commons-validator</artifactId>
				<version>1.1.4</version>
			</dependency>
			<dependency>
				<groupId>commons-cli</groupId>
				<artifactId>commons-cli</artifactId>
				<version>1.2</version>
			</dependency>
			<dependency>
				<groupId>dom4j</groupId>
				<artifactId>dom4j</artifactId>
				<version>1.6.1</version>
			</dependency>
			<dependency>
				<groupId>net.sf.ezmorph</groupId>
				<artifactId>ezmorph</artifactId>
				<version>1.0.6</version>
			</dependency>
			<dependency>
				<groupId>javassist</groupId>
				<artifactId>javassist</artifactId>
				<version>3.12.1.GA</version>
			</dependency>
			<dependency>
				<groupId>jstl</groupId>
				<artifactId>jstl</artifactId>
				<version>1.2</version>
			</dependency>
			<dependency>
				<groupId>javax.transaction</groupId>
				<artifactId>jta</artifactId>
				<version>1.1</version>
			</dependency>
			<dependency>
				<groupId>log4j</groupId>
				<artifactId>log4j</artifactId>
				<version>1.2.17</version>
			</dependency>
			<dependency>
				<groupId>org.slf4j</groupId>
				<artifactId>slf4j-api</artifactId>
				<version>1.7.5</version>
			</dependency>
			<dependency>
				<groupId>org.slf4j</groupId>
				<artifactId>slf4j-log4j12</artifactId>
				<version>1.7.5</version>
			</dependency>
			<dependency>
				<groupId>net.sourceforge.jexcelapi</groupId>
				<artifactId>jxl</artifactId>
				<version>2.6.12</version>
			</dependency>
			<!-- <dependency> <groupId>com.alibaba.external</groupId> <artifactId>sourceforge.spring</artifactId> 
				<version>2.0.1</version> </dependency> <dependency> <groupId>com.alibaba.external</groupId> 
				<artifactId>jakarta.commons.poolg</artifactId> <version>1.3</version> </dependency> -->
			<dependency>
				<groupId>org.jdom</groupId>
				<artifactId>jdom</artifactId>
				<version>1.1.3</version>
			</dependency>
			<dependency>
				<groupId>jaxen</groupId>
				<artifactId>jaxen</artifactId>
				<version>1.1.1</version>
			</dependency>
			<dependency>
				<groupId>com.alibaba</groupId>
				<artifactId>dubbo</artifactId>
				<version>2.5.3</version>
			</dependency>
			<dependency>
				<groupId>redis.clients</groupId>
				<artifactId>jedis</artifactId>
				<version>2.4.2</version>
			</dependency>

			<!-- Common Dependency End -->

			<!-- Zookeeper, used for distributed service coordination -->
			<dependency>
				<groupId>org.apache.zookeeper</groupId>
				<artifactId>zookeeper</artifactId>
				<version>3.4.5</version>
			</dependency>
			<dependency>
				<groupId>com.101tec</groupId>
				<artifactId>zkclient</artifactId>
				<version>0.3</version>
			</dependency>
			<!-- Zookeeper, used for distributed service coordination: end -->


			<!-- Spring Dependency Begin -->
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-aop</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-aspects</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-beans</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-context</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-context-support</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-core</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-expression</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-instrument</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-instrument-tomcat</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-jdbc</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-jms</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-orm</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-oxm</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-struts</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-test</artifactId>
				<version>${org.springframework.version}</version>
				<scope>test</scope>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-tx</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-web</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-webmvc</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<dependency>
				<groupId>org.springframework</groupId>
				<artifactId>spring-webmvc-portlet</artifactId>
				<version>${org.springframework.version}</version>
			</dependency>
			<!-- Spring Dependency End -->

			<!-- MyBatis Dependency Begin -->
			<dependency>
				<groupId>org.mybatis</groupId>
				<artifactId>mybatis</artifactId>
				<version>3.2.8</version>
			</dependency>
			<dependency>
				<groupId>org.mybatis</groupId>
				<artifactId>mybatis-spring</artifactId>
				<version>1.2.2</version>
			</dependency>
			<!-- MyBatis Dependency End -->

			<!-- Mysql Driver Begin -->
			<dependency>
				<groupId>mysql</groupId>
				<artifactId>mysql-connector-java</artifactId>
				<version>5.1.32</version>
			</dependency>
			<!-- Mysql Driver End -->

			<!-- Struts2 Dependency Begin -->
			<dependency>
				<groupId>org.apache.struts</groupId>
				<artifactId>struts2-json-plugin</artifactId>
				<version>${org.apache.struts.version}</version>
			</dependency>
			<dependency>
				<groupId>org.apache.struts</groupId>
				<artifactId>struts2-convention-plugin</artifactId>
				<version>${org.apache.struts.version}</version>
			</dependency>
			<dependency>
				<groupId>org.apache.struts</groupId>
				<artifactId>struts2-core</artifactId>
				<version>${org.apache.struts.version}</version>
			</dependency>
			<dependency>
				<groupId>org.apache.struts</groupId>
				<artifactId>struts2-spring-plugin</artifactId>
				<version>${org.apache.struts.version}</version>
			</dependency>
			<dependency>
				<groupId>org.apache.struts.xwork</groupId>
				<artifactId>xwork-core</artifactId>
				<version>${org.apache.struts.version}</version>
			</dependency>
			<!-- Struts2 Dependency End -->

			<!-- Others Begin -->
			<dependency>
				<groupId>google.code</groupId>
				<artifactId>kaptcha</artifactId>
				<version>2.3.2</version>
			</dependency>
			<dependency>
				<groupId>org.apache.tomcat</groupId>
				<artifactId>servlet-api</artifactId>
				<version>6.0.37</version>
			</dependency>
			<dependency>
				<groupId>org.apache.tomcat</groupId>
				<artifactId>jsp-api</artifactId>
				<version>6.0.37</version>
			</dependency>
			<dependency>
				<groupId>org.freemarker</groupId>
				<artifactId>freemarker</artifactId>
				<version>2.3.19</version>
			</dependency>
			<dependency>
				<groupId>com.alibaba</groupId>
				<artifactId>druid</artifactId>
				<version>1.0.12</version>
			</dependency>
			<dependency>
				<groupId>com.alibaba</groupId>
				<artifactId>fastjson</artifactId>
				<version>1.1.41</version>
			</dependency>
			<dependency>
				<groupId>org.apache.httpcomponents</groupId>
				<artifactId>httpclient</artifactId>
				<version>4.3.3</version>
			</dependency>
			<dependency>
				<groupId>org.jboss.netty</groupId>
				<artifactId>netty</artifactId>
				<version>3.2.5.Final</version>
			</dependency>
			<dependency>
				<groupId>org.apache.activemq</groupId>
				<artifactId>activemq-all</artifactId>
				<version>5.11.1</version>
			</dependency>
			<dependency>
				<groupId>org.apache.activemq</groupId>
				<artifactId>activemq-pool</artifactId>
				<version>5.11.1</version>
			</dependency>
			<!-- Others End -->


			<dependency>
				<groupId>org.jsoup</groupId>
				<artifactId>jsoup</artifactId>
				<version>1.7.3</version>
			</dependency>


		</dependencies>
	</dependencyManagement>

	<build>
		<plugins>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-deploy-plugin</artifactId>
				<version>2.7</version>
				<configuration>
					<uniqueVersion>false</uniqueVersion>
				</configuration>
			</plugin>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-eclipse-plugin</artifactId>
				<version>2.8</version>
			</plugin>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-compiler-plugin</artifactId>
				<version>2.3.2</version>
				<configuration>
					<failOnError>true</failOnError>
					<verbose>true</verbose>
					<fork>true</fork>
					<compilerArgument>-nowarn</compilerArgument>
					<source>1.6</source>
					<target>1.6</target>
					<encoding>UTF-8</encoding>
				</configuration>
			</plugin>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-source-plugin</artifactId>
				<version>2.1.2</version>
				<executions>
					<execution>
						<id>attach-sources</id>
						<goals>
							<goal>jar</goal>
						</goals>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>

</project>

edu-demo-mqproducer

pom.xml

<!-- 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 -->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<parent>
		<groupId>wusc.edu.common</groupId>
		<artifactId>edu-common-parent</artifactId>
		<version>1.0-SNAPSHOT</version>
		<relativePath>../edu-common-parent</relativePath>
	</parent>

	<groupId>wusc.edu.mqtest</groupId>
	<artifactId>edu-demo-mqproducer</artifactId>
	<version>1.0-SNAPSHOT</version>
	<packaging>war</packaging>

	<name>edu-demo-mqproducer</name>
	<url>http://maven.apache.org</url>

	<build>
		<finalName>edu-demo-mqproducer</finalName>
		<resources>
			<resource>
				<targetPath>${project.build.directory}/classes</targetPath>
				<directory>src/main/resources</directory>
				<filtering>true</filtering>
				<includes>
					<include>**/*.xml</include> <include>**/*.properties</include>
				</includes>
			</resource>
		</resources>
	</build>

	<dependencies>

		<!-- Common Dependency Begin -->
		<dependency>
			<groupId>antlr</groupId>
			<artifactId>antlr</artifactId>
		</dependency>
		<dependency>
			<groupId>aopalliance</groupId>
			<artifactId>aopalliance</artifactId>
		</dependency>
		<dependency>
			<groupId>org.aspectj</groupId>
			<artifactId>aspectjweaver</artifactId>
		</dependency>
		<dependency>
			<groupId>cglib</groupId>
			<artifactId>cglib</artifactId>
		</dependency>
		<dependency>
			<groupId>net.sf.json-lib</groupId>
			<artifactId>json-lib</artifactId>
			<classifier>jdk15</classifier>
			<scope>compile</scope>
		</dependency>
		<dependency>
			<groupId>ognl</groupId>
			<artifactId>ognl</artifactId>
		</dependency>
		<dependency>
			<groupId>oro</groupId>
			<artifactId>oro</artifactId>
		</dependency>
		<dependency>
			<groupId>commons-beanutils</groupId>
			<artifactId>commons-beanutils</artifactId>
		</dependency>
		<dependency>
			<groupId>commons-codec</groupId>
			<artifactId>commons-codec</artifactId>
		</dependency>
		<dependency>
			<groupId>commons-collections</groupId>
			<artifactId>commons-collections</artifactId>
		</dependency>
		<dependency>
			<groupId>commons-digester</groupId>
			<artifactId>commons-digester</artifactId>
		</dependency>
		<dependency>
			<groupId>commons-fileupload</groupId>
			<artifactId>commons-fileupload</artifactId>
		</dependency>
		<dependency>
			<groupId>commons-io</groupId>
			<artifactId>commons-io</artifactId>
		</dependency>
		<dependency>
			<groupId>org.apache.commons</groupId>
			<artifactId>commons-lang3</artifactId>
		</dependency>
		<dependency>
			<groupId>commons-logging</groupId>
			<artifactId>commons-logging</artifactId>
		</dependency>
		<dependency>
			<groupId>commons-validator</groupId>
			<artifactId>commons-validator</artifactId>
		</dependency>
		<dependency>
			<groupId>dom4j</groupId>
			<artifactId>dom4j</artifactId>
		</dependency>
		<dependency>
			<groupId>net.sf.ezmorph</groupId>
			<artifactId>ezmorph</artifactId>
		</dependency>
		<dependency>
			<groupId>javassist</groupId>
			<artifactId>javassist</artifactId>
		</dependency>
		<dependency>
			<groupId>log4j</groupId>
			<artifactId>log4j</artifactId>
		</dependency>
		<dependency>
			<groupId>org.slf4j</groupId>
			<artifactId>slf4j-api</artifactId>
		</dependency>
		<dependency>
			<groupId>org.slf4j</groupId>
			<artifactId>slf4j-log4j12</artifactId>
		</dependency>
		<dependency>
			<groupId>com.alibaba</groupId>
			<artifactId>fastjson</artifactId>
		</dependency>

		<!-- Common Dependency End -->

		<!-- Spring Dependency Begin -->
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-aop</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-aspects</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-beans</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-context</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-context-support</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-core</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-jms</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-orm</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-oxm</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-test</artifactId>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-tx</artifactId>
		</dependency>

		<!-- Spring Dependency End -->


		<dependency>
			<groupId>org.apache.activemq</groupId>
			<artifactId>activemq-all</artifactId>
		</dependency>
		<dependency>
			<groupId>org.apache.activemq</groupId>
			<artifactId>activemq-pool</artifactId>
		</dependency>


	</dependencies>


</project>

mq.properties

## MQ
mq.brokerURL=tcp\://192.168.4.101\:61616
mq.userName=wusc
mq.password=wusc.123
mq.pool.maxConnections=10
#queueName
queueName=wusc.edu.mqtest.v1

spring-mq.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p"
	xmlns:context="http://www.springframework.org/schema/context"
	xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx"
	xsi:schemaLocation="http://www.springframework.org/schema/beans  
           http://www.springframework.org/schema/beans/spring-beans-3.2.xsd  
           http://www.springframework.org/schema/aop   
           http://www.springframework.org/schema/aop/spring-aop-3.2.xsd  
           http://www.springframework.org/schema/tx  
           http://www.springframework.org/schema/tx/spring-tx-3.2.xsd  
           http://www.springframework.org/schema/context  
           http://www.springframework.org/schema/context/spring-context-3.2.xsd"
	default-autowire="byName" default-lazy-init="false">
	
	<!-- 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 -->

	<!-- The ConnectionFactory that actually creates Connections, provided by the JMS vendor (ActiveMQ) -->
	<bean id="targetConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
		<!-- ActiveMQ broker address -->
		<!-- The values below are read from mq.properties via the property placeholder -->
		<property name="brokerURL" value="${mq.brokerURL}" />
		<property name="userName" value="${mq.userName}"></property>
		<property name="password" value="${mq.password}"></property>
	</bean>

	<!--
		ActiveMQ provides a PooledConnectionFactory: by injecting an ActiveMQConnectionFactory into it,
		Connections, Sessions and MessageProducers are pooled, which greatly reduces resource consumption.
		It depends on the activemq-pool artifact.
	 -->
	<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
		<property name="connectionFactory" ref="targetConnectionFactory" />
		<!-- Maximum number of pooled connections -->
		<property name="maxConnections" value="${mq.pool.maxConnections}" />
	</bean>

	<!-- The ConnectionFactory Spring uses to manage the real ConnectionFactory -->
	<bean id="connectionFactory" class="org.springframework.jms.connection.SingleConnectionFactory">
		<!-- The target ConnectionFactory is the one that can actually create JMS Connections -->
		<property name="targetConnectionFactory" ref="pooledConnectionFactory" />
	</bean>

	<!-- Spring's JMS helper class; it handles sending, receiving, etc. -->

	<!-- Queue template -->
	<bean id="activeMqJmsTemplate" class="org.springframework.jms.core.JmsTemplate">
		<!-- This connectionFactory is the Spring-managed ConnectionFactory defined above -->
		<property name="connectionFactory" ref="connectionFactory"/>
		<property name="defaultDestinationName" value="${queueName}"></property>
	</bean> 

</beans>

spring-context.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:context="http://www.springframework.org/schema/context" xmlns:aop="http://www.springframework.org/schema/aop"
	xmlns:tx="http://www.springframework.org/schema/tx"
	xsi:schemaLocation="http://www.springframework.org/schema/beans  
           http://www.springframework.org/schema/beans/spring-beans-3.2.xsd  
           http://www.springframework.org/schema/aop   
           http://www.springframework.org/schema/aop/spring-aop-3.2.xsd  
           http://www.springframework.org/schema/tx  
           http://www.springframework.org/schema/tx/spring-tx-3.2.xsd  
           http://www.springframework.org/schema/context  
           http://www.springframework.org/schema/context/spring-context-3.2.xsd"
	default-autowire="byName" default-lazy-init="false">
	
	<!-- 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 -->

	<!-- Configure beans with annotations -->
	<context:annotation-config />

	<!-- Packages to scan -->
	<context:component-scan base-package="wusc.edu.demo" />

	<!-- Load the property file -->
	<context:property-placeholder location="classpath:mq.properties" />

	<!-- proxy-target-class defaults to "false"; set it to "true" to use CGLib dynamic proxies -->
	<aop:aspectj-autoproxy proxy-target-class="true" />	
	
	<import resource="spring-mq.xml" />
</beans>

MailParam.java

/** * 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 . */
package wusc.edu.demo.mqtest.params;

public class MailParam {

	/** Sender **/
	private String from;
	/** Recipient **/
	private String to;
	/** Subject **/
	private String subject;
	/** Mail body **/
	private String content;

	public MailParam() {
	}

	public MailParam(String to, String subject, String content) {
		this.to = to;
		this.subject = subject;
		this.content = content;
	}

	public String getFrom() {
		return from;
	}

	public void setFrom(String from) {
		this.from = from;
	}

	public String getTo() {
		return to;
	}

	public void setTo(String to) {
		this.to = to;
	}

	public String getSubject() {
		return subject;
	}

	public void setSubject(String subject) {
		this.subject = subject;
	}

	public String getContent() {
		return content;
	}

	public void setContent(String content) {
		this.content = content;
	}
}

MQProducer.java

/** * 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 . */
package wusc.edu.demo.mqtest;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.core.MessageCreator;
import org.springframework.stereotype.Service;

import com.alibaba.fastjson.JSONObject;

import wusc.edu.demo.mqtest.params.MailParam;


@Service("mqProducer")//在這裏注入了spring-mq.xml中的bean(serviceMqJmsTemplate)
public class MQProducer {
	
	@Autowired
	private JmsTemplate activeMqJmsTemplate;

	/** Send a message. @param mail the mail parameters to put on the queue */
	public void sendMessage(final MailParam mail) {
		activeMqJmsTemplate.send(new MessageCreator() {
			public Message createMessage(Session session) throws JMSException {
				return session.createTextMessage(JSONObject.toJSONString(mail));
			}
		});
		
	}

}

MQProducerTest.java

/** 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 **/
package wusc.edu.demo.mqtest;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import wusc.edu.demo.mqtest.params.MailParam;

public class MQProducerTest {
	private static final Log log = LogFactory.getLog(MQProducerTest.class);

	public static void main(String[] args) {
		try {
			ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("classpath:spring/spring-context.xml");
			context.start();

			MQProducer mqProducer = (MQProducer) context.getBean("mqProducer");
			// build the mail to send
			MailParam mail = new MailParam();
			mail.setTo("wu-sc@foxmail.com");
			mail.setSubject("ActiveMQ測試");
			mail.setContent("經過ActiveMQ異步發送郵件!");

			mqProducer.sendMessage(mail);

			context.stop();
		} catch (Exception e) {
			log.error("==>MQ context start error:", e);
			System.exit(0);
		} finally {
			log.info("===>System.exit");
			System.exit(0);
		}
	}
}

edu-demo-mqconsumer

pom.xml

<!-- 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 -->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<parent>
		<groupId>wusc.edu.common</groupId>
		<artifactId>edu-common-parent</artifactId>
		<version>1.0-SNAPSHOT</version>
		<relativePath>../edu-common-parent</relativePath>
	</parent>

	<groupId>wusc.edu.mqtest</groupId>
	<artifactId>edu-demo-mqconsumer</artifactId>
	<version>1.0-SNAPSHOT</version>
	<packaging>war</packaging>

	<name>edu-demo-mqconsumer</name>
	<url>http://maven.apache.org</url>

	<build>
		<finalName>edu-demo-mqconsumer</finalName>
		<resources>
			<resource>
				<targetPath>${project.build.directory}/classes</targetPath>
				<directory>src/main/resources</directory>
				<filtering>true</filtering>
				<includes>
					<include>**/*.xml</include> <include>**/*.properties</include>
				</includes>
			</resource>
		</resources>
	</build>

	<dependencies>

		<!-- Common Dependency Begin -->
		<dependency>
			<groupId>antlr</groupId>
			<artifactId>antlr</artifactId>
		</dependency>
		<dependency>
			<groupId>aopalliance</groupId>
			<artifactId>aopalliance</artifactId>
		</dependency>
		<dependency>
			<groupId>org.aspectj</groupId>
			<artifactId>aspectjweaver</artifactId>
		</dependency>
		<dependency>
			<groupId>cglib</groupId>
			<artifactId>cglib</artifactId>
		</dependency>
		<dependency>
			<groupId>net.sf.json-lib</groupId>
			<artifactId>json-lib</artifactId>
			<classifier>jdk15</classifier>
			<scope>compile</scope>
		</dependency>
		<dependency>
			<groupId>ognl</groupId>
			<artifactId>ognl</artifactId>
		</dependency>
		<dependency>
			<groupId>oro</groupId>
			<artifactId>oro</artifactId>
		</dependency>
		<dependency>
			<groupId>commons-beanutils</groupId>
			<artifactId>commons-beanutils</artifactId>
		</dependency>
		<dependency>
			<groupId>commons-codec</groupId>
			<artifactId>commons-codec</artifactId>
		</dependency>
		<dependency>
			<groupId>commons-collections</groupId>
			<artifactId>commons-collections</artifactId>
		</dependency>
		<dependency>
			<groupId>commons-digester</groupId>
			<artifactId>commons-digester</artifactId>
		</dependency>
		<dependency>
			<groupId>commons-fileupload</groupId>
			<artifactId>commons-fileupload</artifactId>
		</dependency>
		<dependency>
			<groupId>commons-io</groupId>
			<artifactId>commons-io</artifactId>
		</dependency>
		<dependency>
			<groupId>org.apache.commons</groupId>
			<artifactId>commons-lang3</artifactId>
		</dependency>
		<dependency>
			<groupId>commons-logging</groupId>
			<artifactId>commons-logging</artifactId>
		</dependency>
		<dependency>
			<groupId>commons-validator</groupId>
			<artifactId>commons-validator</artifactId>
		</dependency>
		<dependency>
			<groupId>dom4j</groupId>
			<artifactId>dom4j</artifactId>
		</dependency>
		<dependency>
			<groupId>net.sf.ezmorph</groupId>
			<artifactId>ezmorph</artifactId>
		</dependency>
		<dependency>
			<groupId>javassist</groupId>
			<artifactId>javassist</artifactId>
		</dependency>
		<dependency>
			<groupId>log4j</groupId>
			<artifactId>log4j</artifactId>
		</dependency>
		<dependency>
			<groupId>org.slf4j</groupId>
			<artifactId>slf4j-api</artifactId>
		</dependency>
		<dependency>
			<groupId>org.slf4j</groupId>
			<artifactId>slf4j-log4j12</artifactId>
		</dependency>
		<dependency>
			<groupId>com.alibaba</groupId>
			<artifactId>fastjson</artifactId>
		</dependency>

		<!-- Common Dependency End -->

		<!-- Spring Dependency Begin -->
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-aop</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-aspects</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-beans</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-context</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-context-support</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-core</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-jms</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-orm</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-oxm</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-test</artifactId>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-tx</artifactId>
		</dependency>

		<!-- Spring Dependency End -->


		<dependency>
			<groupId>org.apache.activemq</groupId>
			<artifactId>activemq-all</artifactId>
		</dependency>
		<dependency>
			<groupId>org.apache.activemq</groupId>
			<artifactId>activemq-pool</artifactId>
		</dependency>

		<dependency>
			<groupId>javax.mail</groupId>
			<artifactId>mail</artifactId>
			<version>1.4.7</version>
		</dependency>


	</dependencies>


</project>

mq.properties

## MQ
mq.brokerURL=tcp\://192.168.4.101\:61616
mq.userName=wusc
mq.password=wusc.123
mq.pool.maxConnections=10
#queueName
queueName=wusc.edu.mqtest.v1

mail.properties

# SMTP server configuration
mail.host=smtp.qq.com
mail.port=25
mail.username=XXX@qq.com
mail.password=XXXX
mail.smtp.auth=true
mail.smtp.timeout=30000
mail.default.from=XXXXX@qq.com

spring-mq.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p"
	xmlns:context="http://www.springframework.org/schema/context"
	xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx"
	xsi:schemaLocation="http://www.springframework.org/schema/beans  
           http://www.springframework.org/schema/beans/spring-beans-3.2.xsd  
           http://www.springframework.org/schema/aop   
           http://www.springframework.org/schema/aop/spring-aop-3.2.xsd  
           http://www.springframework.org/schema/tx  
           http://www.springframework.org/schema/tx/spring-tx-3.2.xsd  
           http://www.springframework.org/schema/context  
           http://www.springframework.org/schema/context/spring-context-3.2.xsd"
	default-autowire="byName" default-lazy-init="false">

	<!-- 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 -->
	
	<!-- The ConnectionFactory that actually creates Connections, provided by the JMS vendor (ActiveMQ) -->
	<bean id="targetConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
		<!-- ActiveMQ broker address -->
	    <property name="brokerURL" value="${mq.brokerURL}" />
	    <property name="userName" value="${mq.userName}"></property>
	    <property name="password" value="${mq.password}"></property> 
	</bean>
	
	<!--
		ActiveMQ provides a PooledConnectionFactory: by injecting an ActiveMQConnectionFactory into it,
		Connections, Sessions and MessageProducers are pooled, which greatly reduces resource consumption.
		It depends on the activemq-pool artifact.
	 -->
	<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
		<property name="connectionFactory" ref="targetConnectionFactory" />
		<property name="maxConnections" value="${mq.pool.maxConnections}" />
	</bean>
	
	<!-- The ConnectionFactory Spring uses to manage the real ConnectionFactory -->
	<bean id="connectionFactory" class="org.springframework.jms.connection.SingleConnectionFactory">
		<!-- The target ConnectionFactory is the one that can actually create JMS Connections -->
		<property name="targetConnectionFactory" ref="pooledConnectionFactory" />
	</bean>
	
	<!-- Spring's JMS helper class; it handles sending, receiving, etc. -->
	
	<!-- Queue template -->
	<bean id="activeMqJmsTemplate" class="org.springframework.jms.core.JmsTemplate">
		<!-- This connectionFactory is the Spring-managed ConnectionFactory defined above -->
	    <property name="connectionFactory" ref="connectionFactory"/>  
	    <property name="defaultDestinationName" value="${queueName}"></property>
	</bean> 
	
	<!-- The sessionAwareQueue destination -->
	<bean id="sessionAwareQueue" class="org.apache.activemq.command.ActiveMQQueue">
		<constructor-arg>
			<value>${queueName}</value>
		</constructor-arg>
	</bean>
	
	<!-- A MessageListener that also has access to the Session -->
	<bean id="consumerSessionAwareMessageListener" class="wusc.edu.demo.mqtest.listener.ConsumerSessionAwareMessageListener"></bean>
	<!-- The listener container provided by Spring -->
	<bean id="sessionAwareListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
		<property name="connectionFactory" ref="connectionFactory" />
		<!-- The queue to listen on -->
		<property name="destination" ref="sessionAwareQueue" />
		<property name="messageListener" ref="consumerSessionAwareMessageListener" />
	</bean>

</beans>

The key points in this configuration are the sessionAwareQueue destination and the session-aware MessageListener.

spring-mail.xml

<?xml version="1.0" encoding="UTF-8" ?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p"
	xmlns:context="http://www.springframework.org/schema/context" xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx"
	xmlns:cache="http://www.springframework.org/schema/cache"
	xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
	   http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.2.xsd
	   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.2.xsd
       http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.2.xsd
       http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache-3.2.xsd">
       
    <!-- 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 -->
	
	<!-- Spring's high-level helper for sending e-mail -->
	<bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
		<property name="host" value="${mail.host}" />
		<property name="username" value="${mail.username}" />
		<property name="password" value="${mail.password}" />
		<property name="defaultEncoding" value="UTF-8"></property>
		<property name="javaMailProperties">
			<props>
				<prop key="mail.smtp.auth">${mail.smtp.auth}</prop>
				<prop key="mail.smtp.timeout">${mail.smtp.timeout}</prop>
			</props>
		</property>
	</bean>

	<bean id="simpleMailMessage" class="org.springframework.mail.SimpleMailMessage">
		<property name="from">
			<value>${mail.default.from}</value>
		</property>
	</bean>
	
	<!-- Thread pool configuration -->
	<bean id="threadPool" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
		<!-- Minimum (core) number of threads kept in the pool -->
		<property name="corePoolSize" value="5" />
		<!-- Idle time allowed for pooled threads -->
		<property name="keepAliveSeconds" value="30000" />
		<!-- Maximum number of threads in the pool -->
		<property name="maxPoolSize" value="50" />
		<!-- Capacity of the queue used to buffer tasks -->
		<property name="queueCapacity" value="100" />
	</bean>

</beans>

spring-context.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:context="http://www.springframework.org/schema/context" xmlns:aop="http://www.springframework.org/schema/aop"
	xmlns:tx="http://www.springframework.org/schema/tx"
	xsi:schemaLocation="http://www.springframework.org/schema/beans  
           http://www.springframework.org/schema/beans/spring-beans-3.2.xsd  
           http://www.springframework.org/schema/aop   
           http://www.springframework.org/schema/aop/spring-aop-3.2.xsd  
           http://www.springframework.org/schema/tx  
           http://www.springframework.org/schema/tx/spring-tx-3.2.xsd  
           http://www.springframework.org/schema/context  
           http://www.springframework.org/schema/context/spring-context-3.2.xsd"
	default-autowire="byName" default-lazy-init="false">
	
	<!-- 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 -->

	<!-- Configure beans with annotations -->
	<context:annotation-config />

	<!-- Packages to scan -->
	<context:component-scan base-package="wusc.edu.demo" />

	<!-- Load the property files -->
	<context:property-placeholder location="classpath:mq.properties,classpath:mail.properties" />

	<!-- proxy-target-class defaults to "false"; set it to "true" to use CGLib dynamic proxies -->
	<aop:aspectj-autoproxy proxy-target-class="true" />	
	
	<import resource="spring-mq.xml" />
	<import resource="spring-mail.xml" />
	
</beans>

ConsumerSessionAwareMessageListener.java

/** * 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 . */
package wusc.edu.demo.mqtest.listener;

import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;

import org.apache.activemq.command.ActiveMQTextMessage;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.core.MessageCreator;
import org.springframework.jms.listener.SessionAwareMessageListener;
import org.springframework.stereotype.Component;

import wusc.edu.demo.mqtest.biz.MailBiz;
import wusc.edu.demo.mqtest.params.MailParam;

import com.alibaba.fastjson.JSONObject;

// A custom message listener
@Component
public class ConsumerSessionAwareMessageListener implements SessionAwareMessageListener<Message> {

	private static final Log log = LogFactory.getLog(ConsumerSessionAwareMessageListener.class);

	@Autowired
	private JmsTemplate activeMqJmsTemplate;
	@Autowired
	private Destination sessionAwareQueue;
	@Autowired
	private MailBiz mailBiz;
	// onMessage is called for every message received from the queue
	public synchronized void onMessage(Message message, Session session) {
		try {
			ActiveMQTextMessage msg = (ActiveMQTextMessage) message;
			final String ms = msg.getText();
			log.info("==>receive message:" + ms);
			// convert the received JSON string into a MailParam object
			MailParam mailParam = JSONObject.parseObject(ms, MailParam.class);
			if (mailParam == null) {
				return;
			}

			try {
				// send the mail
				mailBiz.mailSend(mailParam);
			} catch (Exception e) {
				// if sending fails, the message could be put back on the queue (see the commented-out code below)
// activeMqJmsTemplate.send(sessionAwareQueue, new MessageCreator() {
// public Message createMessage(Session session) throws JMSException {
// return session.createTextMessage(ms);
// }
// });
				log.error("==>MailException:", e);
			}
		} catch (Exception e) {
			log.error("==>", e);
		}
	}
}

MailBiz.java

/** * 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 . */
package wusc.edu.demo.mqtest.biz;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.mail.MailException;
import org.springframework.mail.SimpleMailMessage;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.stereotype.Component;

import wusc.edu.demo.mqtest.params.MailParam;


@Component("mailBiz")
public class MailBiz {

	@Autowired
	private JavaMailSender mailSender; // defined in the Spring configuration
	@Autowired
	private SimpleMailMessage simpleMailMessage; // defined in the Spring configuration
	@Autowired
	private ThreadPoolTaskExecutor threadPool;

	/** Send a mail asynchronously. The MailParam must have to, subject and content set. */
	public void mailSend(final MailParam mailParam) {
		threadPool.execute(new Runnable() {
			public void run() {
				try {
					simpleMailMessage.setFrom(simpleMailMessage.getFrom()); // sender, taken from the configuration file
					simpleMailMessage.setTo(mailParam.getTo()); // recipient
					simpleMailMessage.setSubject(mailParam.getSubject());
					simpleMailMessage.setText(mailParam.getContent());
					mailSender.send(simpleMailMessage);
				} catch (MailException e) {
					throw e;
				}
			}
		});
	}
}

MailParam.java

/** * 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 . */
package wusc.edu.demo.mqtest.params;

public class MailParam {

	/** Sender **/
	private String from;
	/** Recipient **/
	private String to;
	/** Subject **/
	private String subject;
	/** Mail body **/
	private String content;

	public MailParam() {
	}

	public MailParam(String to, String subject, String content) {
		this.to = to;
		this.subject = subject;
		this.content = content;
	}

	public String getFrom() {
		return from;
	}

	public void setFrom(String from) {
		this.from = from;
	}

	public String getTo() {
		return to;
	}

	public void setTo(String to) {
		this.to = to;
	}

	public String getSubject() {
		return subject;
	}

	public void setSubject(String subject) {
		this.subject = subject;
	}

	public String getContent() {
		return content;
	}

	public void setContent(String content) {
		this.content = content;
	}
}

MQConsumer.java

/** * 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 . */
package wusc.edu.demo.mqtest;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MQConsumer {
	private static final Log log = LogFactory.getLog(MQConsumer.class);

	public static void main(String[] args) {
		try {
			ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("classpath:spring/spring-context.xml");
			context.start();
		} catch (Exception e) {
			log.error("==>MQ context start error:", e);
			System.exit(0);
		}
	}
}

Testing

Start the producer:
MQProducerTest.java
In the web console the queue now appears with the following columns:
Name: the queue name
Number Of Pending Messages: messages waiting to be consumed
Number Of Consumers: number of consumers
Messages Enqueued: messages that have entered the queue
Messages Dequeued: messages that have left the queue
Views / Operations: links for inspecting and operating on the queue

Start the consumer:
MQConsumer.java
The console now shows one consumer, one message enqueued and one message dequeued, and the consumer receives the message.

This example shows that the producer and the consumer never call each other directly: the producer just puts a message on the queue, and how the consumer handles it is of no concern to the producer. This is what gives us asynchrony and decoupling.

Installing and Using Redis (Single Node)

Installation (single node)

IP: 192.168.4.111
Environment: CentOS 6.6
Redis version: redis-3.0 (chosen for its clustering and performance improvements; the rc build is a release candidate, and the final 3.0 release was expected shortly)

Installation directory: /usr/local/redis
User: root

Packages required for compiling and installing:

# yum install gcc tcl

Download Redis 3.0 (the latest build at the time of writing is redis-3.0.0-rc5.tar.gz; pick the newest version available when you install):

# cd /usr/local/src
# wget https://github.com/antirez/redis/archive/3.0.0-rc5.tar.gz

Create the installation directory:

# mkdir /usr/local/redis

Extract:

# tar -zxvf 3.0.0-rc5.tar.gz
# mv redis-3.0.0-rc5 redis3.0
# cd redis3.0

Install (use PREFIX to specify the installation directory):

# make PREFIX=/usr/local/redis install

After installation, /usr/local/redis contains a bin directory holding the Redis executables:

redis-benchmark    redis-check-aof     redis-check-dump    redis-cli    redis-server

Configure Redis as a service:
Following the steps above, the Redis init script is /usr/local/src/redis3.0/utils/redis_init_script. Copy it into /etc/rc.d/init.d/ and name it redis:

# cp /usr/local/src/redis3.0/utils/redis_init_script /etc/rc.d/init.d/redis

Edit /etc/rc.d/init.d/redis and adjust it so that it can be registered as a service:

# vi /etc/rc.d/init.d/redis
#!/bin/sh
#
# Simple Redis init.d script conceived to work on Linux systems
# as it does use of the /proc filesystem.

REDISPORT=6379
EXEC=/usr/local/bin/redis-server
CLIEXEC=/usr/local/bin/redis-cli

# file that holds the process id
PIDFILE=/var/run/redis_${REDISPORT}.pid
CONF="/etc/redis/${REDISPORT}.conf"

case "$1" in
    start)
        if [ -f $PIDFILE ]
        then
                echo "$PIDFILE exists, process is already running or crashed"
        else
                echo "Starting Redis server..."
                $EXEC $CONF
        fi
        ;;
    stop)
        if [ ! -f $PIDFILE ]
        then
                echo "$PIDFILE does not exist, process is not running"
        else
                PID=$(cat $PIDFILE)
                echo "Stopping ..."
                $CLIEXEC -p $REDISPORT shutdown
                while [ -x /proc/${PID} ]
                do
                    echo "Waiting for Redis to shutdown ..."
                    sleep 1
                done
                echo "Redis stopped"
        fi
        ;;
    *)
        echo "Please use start or stop as first argument"
        ;;
esac

Review the service script above; the following properties need attention, so prepare these changes:
(1) Add a line right after the first line of the script:

#chkconfig: 2345 80 90

(Without this line, registering the service fails with: service redis does not support chkconfig.)
(2) Keep REDISPORT at 6379 (note that this port number is also used in the configuration file name below).
(3) Change EXEC=/usr/local/bin/redis-server to EXEC=/usr/local/redis/bin/redis-server, i.e. the path of your own redis-server binary.
(4) Change CLIEXEC=/usr/local/bin/redis-cli to CLIEXEC=/usr/local/redis/bin/redis-cli, i.e. the path of your own redis-cli binary.
(5) Configuration file:
Create the Redis configuration directory:

# mkdir /usr/local/redis/conf

Copy the Redis configuration file /usr/local/src/redis3.0/redis.conf into the /usr/local/redis/conf directory, renaming it 6379.conf after the port number (this naming makes it easier to run a cluster later):

# cp /usr/local/src/redis3.0/redis.conf /usr/local/redis/conf/6379.conf

With that in place, adjust the CONF property:
change CONF="/etc/redis/${REDISPORT}.conf" to CONF="/usr/local/redis/conf/${REDISPORT}.conf"
(use your own path; REDISPORT is set near the top of the script).
(6) Change the command that starts Redis so that it runs in the background: $EXEC $CONF & (the trailing "&" pushes the server into the background, so the settings above must be correct).
The modified /etc/rc.d/init.d/redis service script:

#!/bin/sh
#chkconfig: 2345 80 90
#
# Simple Redis init.d script conceived to work on Linux systems
# as it does use of the /proc filesystem.

REDISPORT=6379
EXEC=/usr/local/redis/bin/redis-server
CLIEXEC=/usr/local/redis/bin/redis-cli

PIDFILE=/var/run/redis_${REDISPORT}.pid
CONF="/usr/local/redis/conf/${REDISPORT}.conf"

case "$1" in
    start)
        if [ -f $PIDFILE ]
        then
                echo "$PIDFILE exists, process is already running or crashed"
        else
                echo "Starting Redis server..."
                $EXEC $CONF &
        fi
        ;;
    stop)
        if [ ! -f $PIDFILE ]
        then
                echo "$PIDFILE does not exist, process is not running"
        else
                PID=$(cat $PIDFILE)
                echo "Stopping ..."
                $CLIEXEC -p $REDISPORT shutdown
                while [ -x /proc/${PID} ]
                do
                    echo "Waiting for Redis to shutdown ..."
                    sleep 1
                done
                echo "Redis stopped"
        fi
        ;;
    *)
        echo "Please use start or stop as first argument"
        ;;
esac

以上配置操做完成後,即可將 Redis 註冊成爲服務:

# chkconfig --add redis
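To verify the registration (a sketch; the output below is the usual chkconfig listing format, with the run levels taken from the #chkconfig: 2345 80 90 line added above):

# chkconfig --list redis
redis          0:off   1:off   2:on    3:on    4:on    5:on    6:off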

Open the corresponding port in the firewall:

# vi /etc/sysconfig/iptables

Add:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 6379 -j ACCEPT

Restart the firewall:

# service iptables restart

Edit the Redis configuration file:

# vi /usr/local/redis/conf/6379.conf

Change the following settings:

# If daemonize stays "no", no pid file is created; the service script's
# PIDFILE=/var/run/redis_${REDISPORT}.pid then points at nothing and its start/stop logic will not work.
daemonize no  ->  daemonize yes
# The service script derives the pid file name from the port, so the name must match here:
pidfile /var/run/redis.pid  ->  pidfile /var/run/redis_6379.pid

Start the Redis service:

# service redis start

Add Redis to the PATH: edit /etc/profile (# vi /etc/profile) and append the following at the end:

## Redis env
export PATH=$PATH:/usr/local/redis/bin

Apply the change:

# source /etc/profile

The commands in the bin directory can now be run from any path; in other words, redis-cli and the other Redis commands are available directly.
Once redis-cli is connected you can issue commands (set/get/del and so on) against Redis.
A quick test:

set name xiaoming
get name
xiaoming

Stop the Redis service:

# service redis stop

By default Redis does not require authentication; you can set a password with the requirepass directive in /usr/local/redis/conf/6379.conf.
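For example (a sketch only; choose your own password), set the password, restart the service and verify it with redis-cli:

# vi /usr/local/redis/conf/6379.conf
requirepass myStrongPassword

# service redis stop
# service redis start
# redis-cli -a myStrongPassword ping
PONG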
Demo of using Redis:

Usage

Project layout
spring-context.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:context="http://www.springframework.org/schema/context" xmlns:aop="http://www.springframework.org/schema/aop"
	xmlns:tx="http://www.springframework.org/schema/tx"
	xsi:schemaLocation="http://www.springframework.org/schema/beans  
           http://www.springframework.org/schema/beans/spring-beans-3.2.xsd  
           http://www.springframework.org/schema/aop   
           http://www.springframework.org/schema/aop/spring-aop-3.2.xsd  
           http://www.springframework.org/schema/tx  
           http://www.springframework.org/schema/tx/spring-tx-3.2.xsd  
           http://www.springframework.org/schema/context  
           http://www.springframework.org/schema/context/spring-context-3.2.xsd"
	default-autowire="byName" default-lazy-init="false">
	
	<!-- 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 -->

	<!-- Configure beans with annotations -->
	<context:annotation-config />

	<!-- Packages to scan -->
	<context:component-scan base-package="wusc.edu.demo" />

	<!-- proxy-target-class defaults to "false"; set it to "true" to use CGLib dynamic proxies -->
	<aop:aspectj-autoproxy proxy-target-class="true" />	
	
	<import resource="spring-redis.xml" />
</beans>

spring-redis.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p"
	xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

	<!-- Jedis connection pool configuration -->
	
	<bean id="jedisPoolConfig" class="redis.clients.jedis.JedisPoolConfig">
		<property name="testWhileIdle" value="true" />
		<property name="minEvictableIdleTimeMillis" value="60000" />
		<property name="timeBetweenEvictionRunsMillis" value="30000" />
		<property name="numTestsPerEvictionRun" value="-1" />
		<property name="maxTotal" value="8" />
		<property name="maxIdle" value="8" />
		<property name="minIdle" value="0" />
	</bean>
	<!-- The key configuration: a sharded Jedis pool pointing at the Redis server -->
	<bean id="shardedJedisPool" class="redis.clients.jedis.ShardedJedisPool">
		<constructor-arg index="0" ref="jedisPoolConfig" />
		<constructor-arg index="1">
			<list>
				<bean class="redis.clients.jedis.JedisShardInfo">
					<constructor-arg index="0" value="192.168.4.111" />
					<constructor-arg index="1" value="6379" type="int" />
				</bean>
			</list>
		</constructor-arg>
	</bean>
</beans>

RedisTest.java (test class without Spring integration)

/** 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 **/
package wusc.edu.demo.redis;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import redis.clients.jedis.Jedis;

/** * * @描述: Redis測試 . * @做者: WuShuicheng . * @建立時間: 2015-3-23,上午1:30:40 . * @版本號: V1.0 . */
public class RedisTest {
	private static final Log log = LogFactory.getLog(RedisTest.class);

	public static void main(String[] args) {
		
		Jedis jedis = new Jedis("192.168.4.111");
		
		String key = "wusc";
		String value = "";
		
		jedis.del(key); // 刪數據
		
		jedis.set(key, "WuShuicheng"); // 存數據
		value = jedis.get(key); // 取數據
		log.info(key + "=" + value);
		
		jedis.set(key, "WuShuicheng2"); // 存數據
		value = jedis.get(key); // 取數據
		log.info(key + "=" + value);
		
		//jedis.del(key); // 刪數據
		//value = jedis.get(key); // 取數據
		//log.info(key + "=" + value);
	}
}

RedisSpringTest.java (test class using the Spring configuration above)

/** 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 **/
package wusc.edu.demo.redis;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import redis.clients.jedis.ShardedJedis;
import redis.clients.jedis.ShardedJedisPool;

/** * * @描述: Redis測試 . * @做者: WuShuicheng . * @建立時間: 2015-3-23,上午1:30:40 . * @版本號: V1.0 . */
public class RedisSpringTest {
	private static final Log log = LogFactory.getLog(RedisSpringTest.class);

	public static void main(String[] args) {
		try {
			ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("classpath:spring/spring-context.xml");
			context.start();
			
			ShardedJedisPool pool = (ShardedJedisPool) context.getBean("shardedJedisPool");
			ShardedJedis jedis = pool.getResource();
			
			String key = "wusc";
			String value = "";
			
			jedis.del(key); // 刪數據
			
			jedis.set(key, "WuShuicheng"); // 存數據
			value = jedis.get(key); // 取數據
			log.info(key + "=" + value);
			
			jedis.set(key, "WuShuicheng2"); // 存數據
			value = jedis.get(key); // 取數據
			log.info(key + "=" + value);
			
			jedis.del(key); // 刪數據
			value = jedis.get(key); // 取數據
			log.info(key + "=" + value);

			context.stop();
		} catch (Exception e) {
			log.error("==>RedisSpringTest context start error:", e);
			System.exit(0);
		} finally {
			log.info("===>System.exit");
			System.exit(0);
		}
	}
}

Run either of the two classes above, then go back to the Redis server on Linux and verify the result with redis-cli.
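One detail the Spring demo above glosses over: the ShardedJedis instance borrowed from the pool is never returned, so repeated calls would eventually exhaust the pool. A minimal sketch of the borrow/return pattern against the same shardedJedisPool bean, assuming the Jedis 2.x API this configuration implies:

import org.springframework.context.support.ClassPathXmlApplicationContext;

import redis.clients.jedis.ShardedJedis;
import redis.clients.jedis.ShardedJedisPool;

public class RedisPoolReturnTest {
	public static void main(String[] args) {
		ClassPathXmlApplicationContext context =
				new ClassPathXmlApplicationContext("classpath:spring/spring-context.xml");
		ShardedJedisPool pool = (ShardedJedisPool) context.getBean("shardedJedisPool");
		ShardedJedis jedis = null;
		try {
			jedis = pool.getResource();      // borrow a connection from the pool
			jedis.set("poolTest", "ok");
			System.out.println(jedis.get("poolTest"));
		} finally {
			if (jedis != null) {
				pool.returnResource(jedis);  // give it back (newer Jedis versions: jedis.close())
			}
		}
		context.close();
	}
}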

FastDFS distributed file system: installation and usage (single node)

FastDFS introduction

  1. FastDFS overview
    FastDFS is a lightweight open-source distributed file system.
    It mainly addresses large-capacity file storage and high-concurrency access, and load-balances file reads and writes.
    FastDFS implements RAID in software, so inexpensive IDE disks can be used for storage.
    Storage servers can be scaled out online.
    Files with identical content are stored only once, saving disk space.
    FastDFS is accessed only through its client API; POSIX access is not supported.
    FastDFS is particularly suitable for medium and large web sites, for storing resource files (images, documents, audio, video, and so on).
  2. System architecture
    (architecture diagram)
    The system consists of Tracker (scheduler) and Storage (storage) servers.
    Trackers can be clustered. Storage servers are organized into groups: capacity grows horizontally by adding groups, while the servers inside one group hold identical copies of the files, which provides backup and high availability.
  3. System architecture: upload flow
    (flow diagram)
    1. The client asks a tracker which storage server to upload to; no extra parameters are needed.
    2. The tracker returns an available storage server.
    3. The client talks to that storage server directly and completes the upload.
  4. System architecture: download flow

(flow diagram)

1. The client asks a tracker for a storage server holding the file, passing the file identifier (group name and file name).
2. The tracker returns an available storage server.
3. The client talks to that storage server directly and completes the download.

  5. Terminology
    Tracker Server: the tracker; it mainly does scheduling and acts as a load balancer for access. It records the state of every storage server and is the hub between clients and storage servers.
    Storage Server: the storage node; files and their meta data are kept here.
    group: a group, also called a volume; servers within the same group hold exactly the same files.
    File identifier: two parts, the group name and the file name (including its path).
    meta data: file attributes stored as key-value pairs, e.g. width=1024, height=768.
  6. Synchronization
    Storage servers within a group are peers; uploads, deletes and other operations can be performed on any of them.
    File synchronization only happens between storage servers of the same group, using push (the source server pushes to the target servers).
    Only source data needs to be synchronized; replicated data is not synchronized again, otherwise a loop would form.
    The previous rule has one exception: when a new storage server is added, one existing server pushes all of its data (source and replicated) to the newcomer.
  7. Wire protocol
    A packet consists of a header and a body.
    The header is 10 bytes, laid out as follows (a sketch follows this item):
    8 bytes: body length
    1 byte: command
    1 byte: status
    The body format depends on the command, and the body may be empty.
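To make the 10-byte header layout concrete, here is a small illustrative Java sketch; the command value 11 is just an arbitrary example, and ByteBuffer's default big-endian order is assumed to match the on-wire byte order:

import java.nio.ByteBuffer;

public class FdfsHeaderSketch {
	public static void main(String[] args) {
		// Build a 10-byte header: 8-byte body length, 1-byte command, 1-byte status.
		ByteBuffer header = ByteBuffer.allocate(10);
		header.putLong(123L);   // body length in bytes
		header.put((byte) 11);  // command code (example value)
		header.put((byte) 0);   // status, 0 means OK

		// Parse it back the same way a client or server would.
		header.flip();
		long bodyLength = header.getLong();
		byte command = header.get();
		byte status = header.get();
		System.out.println("bodyLength=" + bodyLength + " command=" + command + " status=" + status);
	}
}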
  8. Runtime directory layout: tracker server
${base_path}
|__data
|     |__storage_groups.dat: storage group information
|     |__storage_servers.dat: storage server list
|__logs
      |__trackerd.log: tracker server log file
  9. Runtime directory layout: storage server
${base_path}
|__data
|     |__.data_init_flag: initialization record of this storage server
|     |__storage_stat.dat: statistics of this storage server
|     |__sync: data-synchronization bookkeeping
|     |     |__binlog.index: index of the current binlog file
|     |     |__binlog.###: update operation records (binlog)
|     |     |__${ip_addr}_${port}.mark: synchronization progress
|     |__first-level directories: 256 directories for data files, e.g. 00, 1F
|           |__second-level directories: 256 data-file directories under each first-level directory
|__logs
      |__storaged.log: storage server log file
  10. Installation and running
#step 1. download FastDFS source package and unpack it,
# if you use HTTP to download file, please download libevent 1.4.x and install it
tar xzf FastDFS_v1.x.tar.gz
#for example:
tar xzf FastDFS_v1.20.tar.gz

#step 2. enter the FastDFS dir
cd FastDFS

#step 3. if HTTP supported, modify make.sh, uncomment the line:
# WITH_HTTPD=1, then execute:
./make.sh

#step 4. make install
./make.sh install

#step 5. edit/modify the config file of tracker and storage

#step 6. run server programs
#start the tracker server:
/usr/local/bin/fdfs_trackerd <tracker_conf_filename>

#start the storage server:
/usr/local/bin/fdfs_storaged <storage_conf_filename>
  11. FastDFS compared with centralized storage

| Metric | FastDFS | NFS | Centralized storage appliance (NetApp, NAS) |
| --- | --- | --- | --- |
| Linear scale-out | High | | |
| High-concurrency file access performance | High | Average | |
| File access method | Proprietary API | POSIX | POSIX supported |
| Hardware cost | Relatively low | Medium | |
| Identical content stored only once | Supported | Not supported | Not supported |
  12. FastDFS compared with MogileFS

| Metric | FastDFS | MogileFS |
| --- | --- | --- |
| Simplicity | Simple: only two roles, tracker and storage | Average: three roles, tracker, storage and a MySQL database storing file index information |
| Performance | Very high (no database; files are synchronized point to point without passing through the tracker) | High (uses MySQL to store file indexes; file synchronization is scheduled and relayed by the tracker) |
| Stability | High (written in C, supports high concurrency and high load) | Average (written in Perl; average support for high concurrency and load) |
| RAID style | Grouping (redundancy within a group), quite flexible | Dynamic redundancy, average flexibility |
| Wire protocol | Proprietary protocol, HTTP supported for downloads | HTTP |
| Documentation | Fairly detailed | Sparse |
| File attributes (meta data) | Supported | Not supported |
| Identical content stored only once | Supported | Not supported |
| Download with file offset | Supported | Not supported |
  13. References
    FastDFS (Chinese): http://www.csource.org/
    FastDFS (English): http://code.google.com/p/fastdfs/

Installation (single node)

Tracker server: 192.168.4.121 (edu-dfs-tracker-01); storage server: 192.168.4.125 (edu-dfs-storage-01); OS: CentOS 6.6
User: root
Data directory: /fastdfs (adjust to wherever your data disk is mounted); uploaded files will end up under it
Packages:
FastDFS v5.05
libfastcommon-master.zip (the common C library extracted from FastDFS and FastDHT)
fastdfs-nginx-module_v1.16.tar.gz
nginx-1.6.2.tar.gz
fastdfs_client_java._v1.25.tar.gz
Source: https://github.com/happyfish100/
Downloads: http://sourceforge.net/projects/fastdfs/files/
Official forum: http://bbs.chinaunix.net/forum-240-1.html

1、 全部跟蹤服務器和存儲服務器均執行以下操做
一、編譯和安裝所需的依賴包:

# yum install make cmake gcc gcc-c++

二、安裝 libfastcommon:
(1)上傳或下載 libfastcommon-master.zip 到/usr/local/src 目錄
(2)解壓

# cd /usr/local/src/
  # unzip libfastcommon-master.zip
  # cd libfastcommon-master

在這裏插入圖片描述
(3) 編譯、安裝

# ./make.sh
# ./make.sh install 
libfastcommon 默認安裝到了 
/usr/lib64/libfastcommon.so 
/usr/lib64/libfdfsclient.so

(4)由於 FastDFS 主程序設置的 lib 目錄是/usr/local/lib,因此須要建立軟連接.

# ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
# ln -s /usr/lib64/libfastcommon.so /usr/lib/libfastcommon.so
# ln -s /usr/lib64/libfdfsclient.so /usr/local/lib/libfdfsclient.so 
# ln -s /usr/lib64/libfdfsclient.so /usr/lib/libfdfsclient.so

三、安裝 FastDFS
(1)上傳或下載 FastDFS 源碼包(FastDFS_v5.05.tar.gz)到 /usr/local/src 目錄
(2)解壓

# cd /usr/local/src/
     # tar -zxvf FastDFS_v5.05.tar.gz
     # cd FastDFS

在這裏插入圖片描述
(3)編譯、安裝(編譯前要確保已經成功安裝了 libfastcommon)

# ./make.sh
# ./make.sh install

採用默認安裝的方式安裝,安裝後的相應文件與目錄:
A、服務腳本在:

/etc/init.d/fdfs_storaged
/etc/init.d/fdfs_tracker

B、配置文件在(樣例配置文件):

/etc/fdfs/client.conf.sample 
/etc/fdfs/storage.conf.sample 
/etc/fdfs/tracker.conf.sample

C、命令工具在/usr/bin/目錄下的:

fdfs_appender_test
         fdfs_appender_test1
         fdfs_append_file
         fdfs_crc32
         fdfs_delete_file
         fdfs_download_file
         fdfs_file_info
         fdfs_monitor
         fdfs_storaged
         fdfs_test
         fdfs_test1
         fdfs_trackerd
         fdfs_upload_appender
         fdfs_upload_file
         stop.sh
		restart.sh

(4) The FastDFS service scripts expect the binaries under /usr/local/bin, but they are actually installed in /usr/bin. You can list the fdfs commands from /usr/bin:

# cd /usr/bin/
# ls | grep fdfs

在這裏插入圖片描述
所以須要修改 FastDFS 服務腳本中相應的命令路徑,也就是把/etc/init.d/fdfs_storaged 和/etc/init.d/fdfs_tracker 兩個腳本中的/usr/local/bin 修改爲/usr/bin:

# vi fdfs_trackerd

使用查找替換命令進統一修改:

%s+/usr/local/bin+/usr/bin
# vi fdfs_storaged

使用查找替換命令進統一修改:

%s+/usr/local/bin+/usr/bin

2、配置 FastDFS 跟蹤器(192.168.4.121)
一、 複製 FastDFS 跟蹤器樣例配置文件,並重命名:

# cd /etc/fdfs/

在這裏插入圖片描述

# cp tracker.conf.sample tracker.conf

二、 編輯跟蹤器配置文件:

# vi /etc/fdfs/tracker.conf

修改的內容以下:

disabled=false
port=22122
base_path=/fastdfs/tracker

(其它參數保留默認配置,具體配置解釋請參考官方文檔說明: http://bbs.chinaunix.net/thread-1941456-1-1.html )
三、 建立基礎數據目錄(參考基礎目錄 base_path 配置):

# mkdir -p /fastdfs/tracker

四、 防火牆中打開跟蹤器端口(默認爲 22122):

# vi /etc/sysconfig/iptables

添加以下端口行:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 22122 -j ACCEPT

重啓防火牆:

# service iptables restart

五、 啓動 Tracker:

# /etc/init.d/fdfs_trackerd start

(初次成功啓動,會在/fastdfs/tracker 目錄下建立 data、logs 兩個目錄) 查看 FastDFS Tracker 是否已成功啓動:

# ps -ef | grep fdfs

在這裏插入圖片描述
六、 關閉 Tracker:

# /etc/init.d/fdfs_trackerd stop

七、 設置 FastDFS 跟蹤器開機啓動:

# vi /etc/rc.d/rc.local

添加如下內容:

## FastDFS Tracker
/etc/init.d/fdfs_trackerd start

3、配置 FastDFS 存儲(192.168.4.125)
一、 複製 FastDFS 存儲器樣例配置文件,並重命名: # cd /etc/fdfs/
在這裏插入圖片描述

# cp storage.conf.sample storage.conf

二、 編輯存儲器樣例配置文件:

# vi /etc/fdfs/storage.conf

修改的內容以下:

disabled=false
port=23000
base_path=/fastdfs/storage
store_path0=/fastdfs/storage 
tracker_server=192.168.4.121:22122
http.server_port=8888

(其它參數保留默認配置,具體配置解釋請參考官方文檔說明:
http://bbs.chinaunix.net/thread-1941456-1-1.html )
三、 建立基礎數據目錄(參考基礎目錄 base_path 配置): # mkdir -p /fastdfs/storage
四、 防火牆中打開存儲器端口(默認爲 23000):

# vi /etc/sysconfig/iptables

添加以下端口行:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 23000 -j ACCEPT

重啓防火牆:

# service iptables restart

五、 啓動 Storage:

# /etc/init.d/fdfs_storaged start

(初次成功啓動,會在/fastdfs/storage 目錄下建立 data、logs 兩個目錄) 查看 FastDFS Storage 是否已成功啓動

# ps -ef | grep fdfs

在這裏插入圖片描述
六、 關閉 Storage:

# /etc/init.d/fdfs_storaged stop

七、 設置 FastDFS 存儲器開機啓動:

# vi /etc/rc.d/rc.local

添加:

## FastDFS Storage 
/etc/init.d/fdfs_storaged start

4、文件上傳測試(192.168.4.121)
一、修改 Tracker 服務器中的客戶端配置文件:

# cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf 
# vi /etc/fdfs/client.conf
  base_path=/fastdfs/tracker
  tracker_server=192.168.4.121:22122

二、執行以下文件上傳命令:

# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /usr/local/src/FastDFS_v5.05.tar.gz

返回 ID 號:group1/M00/00/00/wKgEfVUYNYeAb7XFAAVFOL7FJU4.tar.gz
(能返回以上文件 ID,說明文件上傳成功)
6、在每一個存儲節點上安裝 nginx
一、fastdfs-nginx-module 做用說明
FastDFS stores files on storage servers via the tracker, but storage servers within the same group have to replicate files to each other, which introduces a synchronization delay. Suppose the tracker directs an upload to 192.168.4.125 and the file ID is returned to the client as soon as the upload succeeds; the replication mechanism then copies the file to 192.168.4.126 in the same group. If the client uses that file ID to fetch the file from 192.168.4.126 before replication finishes, the file cannot be found. fastdfs-nginx-module redirects such requests to the source server, avoiding access errors caused by replication lag. (The unpacked fastdfs-nginx-module is used later when building nginx.)

Note: this module only needs to be installed on the storage nodes.

二、上傳 fastdfs-nginx-module_v1.16.tar.gz 到/usr/local/src
三、解壓

# cd /usr/local/src/
# tar -zxvf fastdfs-nginx-module_v1.16.tar.gz

四、修改 fastdfs-nginx-module 的 config 配置文件

# cd fastdfs-nginx-module/src
# vi config
Change
CORE_INCS="$CORE_INCS /usr/local/include/fastdfs /usr/local/include/fastcommon/"
to:
CORE_INCS="$CORE_INCS /usr/include/fastdfs /usr/include/fastcommon/"
(libfastcommon installs its headers under /usr/include, not /usr/local/include)

(Note: getting this path right matters; otherwise the nginx build will fail.)

五、上傳當前的穩定版本 Nginx(nginx-1.6.2.tar.gz)到/usr/local/src 目錄
六、安裝編譯 Nginx 所需的依賴包

# yum install gcc gcc-c++ make automake autoconf libtool pcre* zlib openssl openssl-devel

七、編譯安裝 Nginx(添加 fastdfs-nginx-module 模塊)

# cd /usr/local/src/
# tar -zxvf nginx-1.6.2.tar.gz
# cd nginx-1.6.2
# ./configure --add-module=/usr/local/src/fastdfs-nginx-module/src
# make && make install

八、複製 fastdfs-nginx-module 源碼中的配置文件到/etc/fdfs 目錄,並修改

# cp /usr/local/src/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/ 
# vi /etc/fdfs/mod_fastdfs.conf

Modify the following settings; the key ones are tracker_server, url_have_group_name and store_path0:

connect_timeout=10
     base_path=/tmp
     tracker_server=192.168.4.121:22122
     storage_server_port=23000
     group_name=group1
     url_have_group_name = true
     store_path0=/fastdfs/storage

九、複製 FastDFS 的部分配置文件到/etc/fdfs 目錄

# cd /usr/local/src/FastDFS/conf
# cp http.conf mime.types /etc/fdfs/

十、在/fastdfs/storage 文件存儲目錄下建立軟鏈接,將其連接到實際存放數據的目錄

# ln -s /fastdfs/storage/data/ /fastdfs/storage/data/M00

十一、配置 Nginx
A minimal nginx configuration sample; the parts that matter are:

user root;
listen 8888;
location ~/group([0-9])/M00 {
#alias /fastdfs/storage/data;
ngx_fastdfs_module;
}

listen 8888; : changed from 80 to 8888 to match http.server_port=8888 in storage.conf above.
~/group([0-9])/M00 : because url_have_group_name=true the URL carries the group name, and M00 is the symbolic link created earlier.
ngx_fastdfs_module; : pulls the fastdfs-nginx-module into this location.

user root;
worker_processes 1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    server {
        listen       8888;
        server_name  localhost;
        location ~/group([0-9])/M00 {
            #alias /fastdfs/storage/data;
            ngx_fastdfs_module;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

Notes:
A. The 8888 port must match http.server_port=8888 in /etc/fdfs/storage.conf (8888 is the default there); if you want to use port 80 instead, change both sides.
B. When a storage node serves multiple groups, the access path carries the group name, e.g. /group1/M00/00/00/xxx; the corresponding nginx configuration (ready for scaling out to more groups later) is:

location ~/group([0-9])/M00 {
         ngx_fastdfs_module;
}

C. If downloads keep returning 404, change user nobody on the first line of nginx.conf to user root and restart nginx.

十二、防火牆中打開 Nginx 的 8888 端口

# vi /etc/sysconfig/iptables

添加:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 8888 -j ACCEPT 
# service iptables restart

1三、啓動 Nginx

# /usr/local/nginx/sbin/nginx
ngx_http_fastdfs_set pid=xxx

(重啓 Nginx 的命令爲:/usr/local/nginx/sbin/nginx -s reload)

1四、經過瀏覽器訪問測試時上傳的文件
http://192.168.4.125:8888/group1/M00/00/00/wKgEfVUYNYeAb7XFAAVFOL7FJU4.tar.gz

發現瀏覽器直接就開始下載了。

7、FastDFS 的使用的 Demo 樣例講解與演示:
具體內容請參考樣例代碼和視頻教程
注意:千萬不要使用 kill -9 命令強殺 FastDFS 進程,不然可能會致使 binlog 數據丟失。

Usage

File structure
(screenshot)
The common and fastdfs packages contain the official FastDFS Java client source files.
pom.xml

<!-- 基於Dubbo的分佈式系統架構視頻教程,吳水成,wu-sc@foxmail.com,學習交流QQ羣:367211134 -->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>wusc.edu.demo</groupId>
	<artifactId>edu-demo-fdfs</artifactId>
	<version>1.0-SNAPSHOT</version>
	<packaging>war</packaging>

	<name>edu-demo-fdfs</name>
	<url>http://maven.apache.org</url>

	<build>
		<finalName>edu-demo-fdfs</finalName>
		<resources>
			<resource>
				<targetPath>${project.build.directory}/classes</targetPath>
				<directory>src/main/resources</directory>
				<filtering>true</filtering>
				<includes>
					<include>**/*.xml</include> <include>**/*.properties</include>
				</includes>
			</resource>
		</resources>
	</build>

	<dependencies>
		<dependency>
			<groupId>junit</groupId>
			<artifactId>junit</artifactId>
			<version>4.11</version>
		</dependency>
		<dependency>
			<groupId>commons-fileupload</groupId>
			<artifactId>commons-fileupload</artifactId>
			<version>1.3.1</version>
		</dependency>
		<dependency>
			<groupId>commons-io</groupId>
			<artifactId>commons-io</artifactId>
			<version>2.0.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.commons</groupId>
			<artifactId>commons-lang3</artifactId>
			<version>3.1</version>
		</dependency>
		<dependency>
			<groupId>commons-logging</groupId>
			<artifactId>commons-logging</artifactId>
			<version>1.1.3</version>
		</dependency>
		<dependency>
			<groupId>log4j</groupId>
			<artifactId>log4j</artifactId>
			<version>1.2.17</version>
		</dependency>
		<dependency>
			<groupId>org.slf4j</groupId>
			<artifactId>slf4j-api</artifactId>
			<version>1.7.5</version>
		</dependency>
		<dependency>
			<groupId>org.slf4j</groupId>
			<artifactId>slf4j-log4j12</artifactId>
			<version>1.7.5</version>
		</dependency>

	</dependencies>


</project>

fdfs_client.conf

connect_timeout = 10
network_timeout = 30
charset = UTF-8
http.tracker_http_port = 8080
http.anti_steal_token = no
http.secret_key = FastDFS1234567890

tracker_server = 192.168.4.121:22122

FastDFSClient.java

package wusc.edu.demo.fdfs;

import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.commons.lang3.StringUtils;
import org.apache.log4j.Logger;
import org.csource.common.NameValuePair;
import org.csource.fastdfs.ClientGlobal;
import org.csource.fastdfs.StorageClient1;
import org.csource.fastdfs.StorageServer;
import org.csource.fastdfs.TrackerClient;
import org.csource.fastdfs.TrackerServer;

/** * * @描述: FastDFS分佈式文件系統操做客戶端 . * @做者: WuShuicheng . * @建立時間: 2015-3-29,下午8:13:49 . * @版本號: V1.0 . */
public class FastDFSClient {

	//private static final String CONF_FILENAME = Thread.currentThread().getContextClassLoader().getResource("").getPath() + "fdfs_client.conf";
	private static final String CONF_FILENAME = "src/main/resources/fdfs/fdfs_client.conf";
	private static StorageClient1 storageClient1 = null;

	private static Logger logger = Logger.getLogger(FastDFSClient.class);

	/** * 只加載一次. */
	static {
		try {
			logger.info("=== CONF_FILENAME:" + CONF_FILENAME);
			ClientGlobal.init(CONF_FILENAME);
			TrackerClient trackerClient = new TrackerClient(ClientGlobal.g_tracker_group);
			TrackerServer trackerServer = trackerClient.getConnection();
			if (trackerServer == null) {
				logger.error("getConnection return null");
			}
			StorageServer storageServer = trackerClient.getStoreStorage(trackerServer);
			if (storageServer == null) {
				logger.error("getStoreStorage return null");
			}
			storageClient1 = new StorageClient1(trackerServer, storageServer);
		} catch (Exception e) {
			logger.error(e);
		}
	}

	/** * * @param file * 文件 * @param fileName * 文件名 * @return 返回Null則爲失敗 */
	public static String uploadFile(File file, String fileName) {
		FileInputStream fis = null;
		try {
			NameValuePair[] meta_list = null; // new NameValuePair[0];
			fis = new FileInputStream(file);
			byte[] file_buff = null;
			if (fis != null) {
				int len = fis.available();
				file_buff = new byte[len];
				fis.read(file_buff);
			}

			String fileid = storageClient1.upload_file1(file_buff, getFileExt(fileName), meta_list);
			return fileid;
		} catch (Exception ex) {
			logger.error(ex);
			return null;
		}finally{
			if (fis != null){
				try {
					fis.close();
				} catch (IOException e) {
					logger.error(e);
				}
			}
		}
	}

	/** * 根據組名和遠程文件名來刪除一個文件 * * @param groupName * 例如 "group1" 若是不指定該值,默認爲group1 * @param fileName * 例如"M00/00/00/wKgxgk5HbLvfP86RAAAAChd9X1Y736.jpg" * @return 0爲成功,非0爲失敗,具體爲錯誤代碼 */
	public static int deleteFile(String groupName, String fileName) {
		try {
			int result = storageClient1.delete_file(groupName == null ? "group1" : groupName, fileName);
			return result;
		} catch (Exception ex) {
			logger.error(ex);
			return -1; // non-zero means failure; returning 0 here would wrongly report success
		}
	}

	/** * 根據fileId來刪除一個文件(咱們如今用的就是這樣的方式,上傳文件時直接將fileId保存在了數據庫中) * * @param fileId * file_id源碼中的解釋file_id the file id(including group name and filename);例如 group1/M00/00/00/ooYBAFM6MpmAHM91AAAEgdpiRC0012.xml * @return 0爲成功,非0爲失敗,具體爲錯誤代碼 */
	public static int deleteFile(String fileId) {
		try {
			int result = storageClient1.delete_file1(fileId);
			return result;
		} catch (Exception ex) {
			logger.error(ex);
			return -1; // non-zero means failure; returning 0 here would wrongly report success
		}
	}

	/** * 修改一個已經存在的文件 * * @param oldFileId * 原來舊文件的fileId, file_id源碼中的解釋file_id the file id(including group name and filename);例如 group1/M00/00/00/ooYBAFM6MpmAHM91AAAEgdpiRC0012.xml * @param file * 新文件 * @param filePath * 新文件路徑 * @return 返回空則爲失敗 */
	public static String modifyFile(String oldFileId, File file, String filePath) {
		String fileid = null;
		try {
			// 先上傳
			fileid = uploadFile(file, filePath);
			if (fileid == null) {
				return null;
			}
			// 再刪除
			int delResult = deleteFile(oldFileId);
			if (delResult != 0) {
				return null;
			}
		} catch (Exception ex) {
			logger.error(ex);
			return null;
		}
		return fileid;
	}

	/** * 文件下載 * * @param fileId * @return 返回一個流 */
	public static InputStream downloadFile(String fileId) {
		try {
			byte[] bytes = storageClient1.download_file1(fileId);
			InputStream inputStream = new ByteArrayInputStream(bytes);
			return inputStream;
		} catch (Exception ex) {
			logger.error(ex);
			return null;
		}
	}

	/** * 獲取文件後綴名(不帶點). * * @return 如:"jpg" or "". */
	private static String getFileExt(String fileName) {
		if (StringUtils.isBlank(fileName) || !fileName.contains(".")) {
			return "";
		} else {
			return fileName.substring(fileName.lastIndexOf(".") + 1); // 不帶最後的點
		}
	}
}

FastDFSTest.java

package wusc.edu.demo.fdfs.test;

import java.io.File;
import java.io.InputStream;

import org.apache.commons.io.FileUtils;

import wusc.edu.demo.fdfs.FastDFSClient;


/** * * @描述: FastDFS測試 . * @做者: WuShuicheng . * @建立時間: 2015-3-29,下午8:11:36 . * @版本號: V1.0 . */
public class FastDFSTest {
	
	/** * 上傳測試. * @throws Exception */
	public static void upload() throws Exception {
		String filePath = "E:/WorkSpaceSpr10.6/edu-demo-fdfs/TestFile/DubboVideo.jpg";
		File file = new File(filePath);
		String fileId = FastDFSClient.uploadFile(file, filePath);
		System.out.println("Upload local file " + filePath + " ok, fileid=" + fileId);
		// fileId: group1/M00/00/00/wKgEfVUYPieAd6a0AAP3btxj__E335.jpg
		// url: http://192.168.4.125:8888/group1/M00/00/00/wKgEfVUYPieAd6a0AAP3btxj__E335.jpg
	}
	
	/** * 下載測試. * @throws Exception */
	public static void download() throws Exception {
		String fileId = "group1/M00/00/00/wKgEfVUYPieAd6a0AAP3btxj__E335.jpg";
		InputStream inputStream = FastDFSClient.downloadFile(fileId);
		File destFile = new File("E:/WorkSpaceSpr10.6/edu-demo-fdfs/TestFile/DownloadTest.jpg");
		FileUtils.copyInputStreamToFile(inputStream, destFile);
	}

	/** * 刪除測試 * @throws Exception */
	public static void delete() throws Exception {
		String fileId = "group1/M00/00/00/wKgEfVUYPieAd6a0AAP3btxj__E335.jpg";
		int result = FastDFSClient.deleteFile(fileId);
		System.out.println(result == 0 ? "刪除成功" : "刪除失敗:" + result);
	}


	
	/** * @param args * @throws Exception */
	public static void main(String[] args) throws Exception {
		//upload();
		//download();
		delete();

	}

}

經過上面的FastDFSTest進行簡單的測試

Note: after a successful upload, the file can also be opened in a browser (with nginx integrated as above) using the returned file ID appended to the storage server's HTTP address.
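The demo above always passes null metadata. As a sketch of how the meta data mentioned in the introduction could be attached, assuming the same fdfs_client.conf, the StorageClient1 API already used in FastDFSClient, and its get_metadata1 method; the local path and width/height values are examples only:

import org.csource.common.NameValuePair;
import org.csource.fastdfs.ClientGlobal;
import org.csource.fastdfs.StorageClient1;
import org.csource.fastdfs.StorageServer;
import org.csource.fastdfs.TrackerClient;
import org.csource.fastdfs.TrackerServer;

public class FastDFSMetaDataTest {
	public static void main(String[] args) throws Exception {
		ClientGlobal.init("src/main/resources/fdfs/fdfs_client.conf");
		TrackerClient trackerClient = new TrackerClient(ClientGlobal.g_tracker_group);
		TrackerServer trackerServer = trackerClient.getConnection();
		StorageServer storageServer = trackerClient.getStoreStorage(trackerServer);
		StorageClient1 client = new StorageClient1(trackerServer, storageServer);

		// Attach key-value metadata to the upload.
		NameValuePair[] meta = new NameValuePair[] {
				new NameValuePair("width", "1024"),
				new NameValuePair("height", "768")
		};
		String fileId = client.upload_file1("E:/TestFile/DubboVideo.jpg", "jpg", meta);
		System.out.println("fileId=" + fileId);

		// Read the metadata back by file id.
		NameValuePair[] stored = client.get_metadata1(fileId);
		for (NameValuePair pair : stored) {
			System.out.println(pair.getName() + "=" + pair.getValue());
		}
	}
}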


Why introduce a distributed file system?
Whether you are building a payment system or an ordinary web site, images and other files need one central place to be managed; this is a prerequisite for clustering.
To cluster an application you first have to externalize two things: session sharing and file/image sharing. Only once these are split out can the application be clustered cleanly.

FastDFS configuration files explained

http://bbs.chinaunix.net/thread-1941456-1-1.html

First, tracker.conf:
# is this config file disabled
# false for enabled
# true for disabled
disabled=false
# Whether this config file is disabled: false means it takes effect, true means it does not.

# bind an address of this host
# empty for bind all addresses of this host
bind_addr=
# 是否綁定IP,
# bind_addr= 後面爲綁定的 IP 地址 (經常使用於服務器有多個 IP 但只但願一個 IP 提供服務)。如 果不填則表示全部的(通常不填就 OK),相信較熟練的 SA 都經常使用到相似功能,不少系統和應用 都有

# the tracker server port
port=22122
# 提供服務的端口,不做過多解釋了

# connect timeout in seconds
# default value is 30s
connect_timeout=30
#鏈接超時時間,針對 socket 套接字函數 connect

# network timeout in seconds
network_timeout=60
# tracker server 的網絡超時,單位爲秒。發送或接收數據時,若是在超時時間後還不能發 送或接收數據,則本次網絡通訊失敗。

# the base path to store data and log files
base_path=/home/yuqing/fastdfs
# base_path directory (the root directory must already exist; subdirectories are created automatically)
# Directory layout of the tracker server:
${base_path}
|__data
|     |__storage_groups.dat: storage group information
|     |__storage_servers.dat: storage server list
|__logs
      |__trackerd.log: tracker server log file

數據文件 storage_groups.dat 和 storage_servers.dat 中的記錄之間以換行符(\n)分隔,字段 之間以西文逗號(,)分隔。
storage_groups.dat 中的字段依次爲:
1. group_name:組名
2. storage_port:storage server 端口號


storage_servers.dat 中記錄 storage server 相關信息,字段依次爲: 
1. group_name:所屬組名
2. ip_addr:ip 地址
3. status:狀態
4. sync_src_ip_addr:向該 storage server 同步已有數據文件的源服務器
5. sync_until_timestamp:同步已有數據文件的截至時間(UNIX 時間戳)
6. stat.total_upload_count:上傳文件次數
7. stat.success_upload_count:成功上傳文件次數
8. stat.total_set_meta_count:更改 meta data 次數
9. stat.success_set_meta_count:成功更改 meta data 次數
10. stat.total_delete_count:刪除文件次數
11. stat.success_delete_count:成功刪除文件次數
12. stat.total_download_count:下載文件次數
13. stat.success_download_count:成功下載文件次數
14. stat.total_get_meta_count:獲取 meta data 次數
15. stat.success_get_meta_count:成功獲取 meta data 次數
16. stat.last_source_update:最近一次源頭更新時間(更新操做來自客戶端)
17. stat.last_sync_update:最近一次同步更新時間(更新操做來自其餘 storage server 的同
步)


# max concurrent connections this server supported
# max_connections worker threads start when this service startup
max_connections=256
# 系統提供服務時的最大鏈接數。對於 V1.x,因一個鏈接由一個線程服務,也就是工做線程 數。
# 對於 V2.x,最大鏈接數和工做線程數沒有任何關係

# work thread count, should <= max_connections
# default value is 4
# since V2.00
work_threads=4
# Introduced in V2.0: the number of worker threads, usually set to the number of CPUs.

# the method of selecting group to upload files
# 0: round robin
# 1: specify group

# 2: load balance, select the max free space group to upload file
store_lookup=2
# 上傳組(卷) 的方式 0:輪詢方式 1: 指定組 2: 平衡負載(選擇最大剩餘空間的組(卷)上傳) 
# 這裏若是在應用層指定了上傳到一個固定組,那麼這個參數被繞過

# which group to upload file
# when store_lookup set to 1, must set store_group to the group name
store_group=group2
# 當上一個參數設定爲 1 時 (store_lookup=1,即指定組名時),必須設置本參數爲系統中存 在的一個組名。若是選擇其餘的上傳方式,這個參數就沒有效了。

# which storage server to upload file
# 0: round robin (default)
# 1: the first server order by ip address
# 2: the first server order by priority (the minimal)
store_server=0
# 選擇哪一個 storage server 進行上傳操做(一個文件被上傳後,這個 storage server 就至關於 這個文件的 storage server 源,會對同組的 storage server 推送這個文件達到同步效果)
# 0: 輪詢方式
# 1: 根據 ip 地址進行排序選擇第一個服務器(IP 地址最小者)
# 2: 根據優先級進行排序(上傳優先級由 storage server 來設置,參數名爲 upload_priority)

# which path(means disk or mount point) of the storage server to upload file # 0: round robin
# 2: load balance, select the max free space path to upload file store_path=0
# 選擇 storage server 中的哪一個目錄進行上傳。storage server 能夠有多個存放文件的 base path(能夠理解爲多個磁盤)。
# 0: 輪流方式,多個目錄依次存放文件
#2: 選擇剩餘空間最大的目錄存放文件(注意:剩餘磁盤空間是動態的,所以存儲到的目錄 或磁盤可能也是變化的)

# which storage server to download file
# 0: round robin (default)
# 1: the source storage server which the current file uploaded to
download_server=0
# 選擇哪一個 storage server 做爲下載服務器
# 0: 輪詢方式,能夠下載當前文件的任一 storage server
# 1: 哪一個爲源 storage server 就用哪個 (前面說過了這個 storage server 源 是怎樣產生的) 就是以前上傳到哪一個 storage server 服務器就是哪一個了

# reserved storage space for system or other applications. # if the free(available) space of any stoarge server in
# a group <= reserved_storage_space,
# no file can be uploaded to this group. # bytes unit can be one of follows:
### G or g for gigabyte(GB)
### M or m for megabyte(MB)
### K or k for kilobyte(KB)
### no unit for byte(B)
### XX.XX% as ratio such as reserved_storage_space = 10%
reserved_storage_space = 10%
# storage server 上保留的空間,保證系統或其餘應用需求空間。能夠用絕對值或者百分比 (V4 開始支持百分比方式)。
#(指出 若是同組的服務器的硬盤大小同樣,以最小的爲準,也就是隻要同組中有一臺服務器 達到這個標準了,這個標準就生效,緣由就是由於他們進行備份)

#standard log level as syslog, case insensitive, value list: ### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info
# 選擇日誌級別(日誌寫在哪?看前面的說明了,有目錄介紹哦 呵呵)

#unix group name to run this program,
#not set (empty) means run by the group of current user
run_by_group=
# 操做系統運行 FastDFS 的用戶組 (不填 就是當前用戶組,哪一個啓動進程就是哪一個)

#unix username to run this program,
#not set (empty) means run by current user
run_by_user=
# 操做系統運行 FastDFS 的用戶 (不填 就是當前用戶,哪一個啓動進程就是哪一個)
# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or
# host[01-08,20-25].domain.com, for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
allow_hosts=*
# 能夠鏈接到此 tracker server 的 ip 範圍(對全部類型的鏈接都有影響,包括客戶端,storage server)
# sync log buff to disk every interval seconds
# default value is 10 seconds
sync_log_buff_interval = 10
# 同步或刷新日誌信息到硬盤的時間間隔,單位爲秒
# 注意:tracker server 的日誌不是時時寫硬盤的,而是先寫內存。

# check storage server alive interval
check_active_interval = 120
# 檢測 storage server 存活的時間隔,單位爲秒。
#storageserver按期向trackerserver 發心跳,若是trackerserver在一個check_active_interval 內尚未收到 storage server 的一次心跳,那邊將認爲該 storage server 已經下線。因此本參 數值必須大於 storage server 配置的心跳時間間隔。一般配置爲 storage server 心跳時間間隔 的 2 倍或 3 倍。

# thread stack size, should > 512KB
# default value is 1MB
thread_stack_size=1MB
# 線程棧的大小。FastDFSserver端採用了線程方式。更正一下,trackerserver線程棧不該小 於 64KB,不是 512KB。
# 線程棧越大,一個線程佔用的系統資源就越多。若是要啓動更多的線程(V1.x 對應的參數 爲 max_connections,
V2.0 爲 work_threads),能夠適當下降本參數值。

# auto adjust when the ip address of the storage server changed
# default value is true
storage_ip_changed_auto_adjust=true
# 這個參數控制當 storage server IP 地址改變時,集羣是否自動調整。注:只有在 storage server 進程重啓時才完成自動調整。
# storage sync file max delay seconds
# default value is 86400 seconds (one day)
# since V2.00
storage_sync_file_max_delay = 86400
# V2.0 引入的參數。存儲服務器之間同步文件的最大延遲時間,缺省爲 1 天。根據實際狀況 進行調整
# 注:本參數並不影響文件同步過程。本參數僅在下載文件時,判斷文件是否已經被同步完 成的一個閥值(經驗值)

# the max time of storage sync a file
# default value is 300 seconds
# since V2.00
storage_sync_file_max_time = 300
# V2.0 引入的參數。存儲服務器同步一個文件須要消耗的最大時間,缺省爲 300s,即 5 分 鍾。
# 注:本參數並不影響文件同步過程。本參數僅在下載文件時,做爲判斷當前文件是否被同 步完成的一個閥值(經驗值)

# if use a trunk file to store several small files # default value is false
# since V3.00
use_trunk_file = false
# V3.0 引入的參數。是否使用小文件合併存儲特性,缺省是關閉的。

# the min slot size, should <= 4KB # default value is 256 bytes
# since V3.00
slot_min_size = 256
# V3.0 引入的參數。
# trunk file 分配的最小字節數。好比文件只有 16 個字節,系統也會分配 slot_min_size 個字 節。

# the max slot size, should > slot_min_size
# store the upload file to trunk file when it's size <=
# default value is 16MB
# since V3.00
slot_max_size = 16MB
# V3.0 引入的參數。
# 只有文件大小<=這個參數值的文件,纔會合併存儲。若是一個文件的大小大於這個參數值, 將直接保存到一個文件中(即不採用合併存儲方式)。

# the trunk file size, should >= 4MB # default value is 64MB
# since V3.00
trunk_file_size = 64MB
# V3.0 引入的參數。
# 合併存儲的 trunk file 大小,至少 4MB,缺省值是 64MB。不建議設置得過大。

# if create trunk file advancely
# default value is false
trunk_create_file_advance = false
# 是否提早建立 trunk file。只有當這個參數爲 true,下面 3 個以 trunk_create_file_打頭的參 數纔有效。

# the time base to create trunk file
# the time format: HH:MM
# default value is 02:00
trunk_create_file_time_base = 02:00
# 提早建立 trunk file 的起始時間點(基準時間),02:00 表示第一次建立的時間點是凌晨 2
點。
# the interval of create trunk file, unit: second
# default value is 38400 (one day)
trunk_create_file_interval = 86400

# 建立 trunk file 的時間間隔,單位爲秒。若是天天只提早建立一次,則設置爲 86400
# the threshold to create trunk file
# when the free trunk file size less than the threshold, will create
# the trunk files
# default value is 0
trunk_create_file_space_threshold = 20G
# 提早建立 trunk file 時,須要達到的空閒 trunk 大小
# 好比本參數爲 20G,而當前空閒 trunk 爲 4GB,那麼只須要建立 16GB 的 trunk file 便可。

# if check trunk space occupying when loading trunk free spaces # the occupied spaces will be ignored
# default value is false
# since V3.09
# NOTICE: set this parameter to true will slow the loading of trunk spaces # when startup. you should set this parameter to true when neccessary. trunk_init_check_occupying = false
#trunk 初始化時,是否檢查可用空間是否被佔用

# if ignore storage_trunk.dat, reload from trunk binlog
# default value is false
# since V3.10
# set to true once for version upgrade when your version less than V3.10 trunk_init_reload_from_binlog = false
# 是否無條件從 trunk binlog 中加載 trunk 可用空間信息
# FastDFS 缺省是從快照文件 storage_trunk.dat 中加載 trunk 可用空間,
# 該文件的第一行記錄的是 trunk binlog 的 offset,而後從 binlog 的 offset 開始加載

# if use storage ID instead of IP address # default value is false
# since V4.00
use_storage_id = false
# 是否使用 server ID 做爲 storage server 標識

# specify storage ids filename, can use relative or absolute path # since V4.00
storage_ids_filename = storage_ids.conf
# use_storage_id 設置爲 true,才須要設置本參數
# 在文件中設置組名、server ID 和對應的 IP 地址,參見源碼目錄下的配置示例:
conf/storage_ids.conf

# if store slave file use symbol link
# default value is false
# since V4.01
store_slave_file_use_link = false
# 存儲從文件是否採用 symbol link(符號連接)方式
# 若是設置爲 true,一個從文件將佔用兩個文件:原始文件及指向它的符號連接。

# if rotate the error log every day # default value is false
# since V4.02
rotate_error_log = false
# 是否認期輪轉 error log,目前僅支持一天輪轉一次

# rotate error log time base, time format: Hour:Minute 
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time=00:00
# error log 按期輪轉的時間點,只有當 rotate_error_log 設置爲 true 時有效

# rotate error log when the log file exceeds this size # 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0
# error log 按大小輪轉
# 設置爲 0 表示不按文件大小輪轉,不然當 error log 達到該大小,就會輪轉到新文件中

# The HTTP settings below are inactive in the default build; uncomment #WITH_HTTPD=1 in make.sh and recompile to enable them.
# The HTTP-related options are only explained literally here.
#HTTP settings
http.disabled=false    # whether the built-in HTTP service is disabled
http.server_port=8080  # HTTP service port

#use "#include" directive to include http other settiongs
##include http.conf    # remove the first # to include the http.conf settings

That completes tracker.conf; next is storage.conf.

# is this config file disabled
# false for enabled
# true for disabled
disabled=false
# Same meaning as in tracker.conf above.

# the name of the group this storage server belongs to
group_name=group1
# The group (volume) this storage server belongs to.

# bind an address of this host
# empty for bind all addresses of this host bind_addr=
# 同上文

# if bind an address of this host when connect to other servers
# (this storage server as a client)
# true for binding the address configed by above parameter: "bind_addr"
# false for binding any address of this host
client_bind=true
# bind_addr 一般是針對 server 的。當指定 bind_addr 時,本參數纔有效。
# 本 storage server 做爲 client 鏈接其餘服務器(如 tracker server、其餘 storage server),是 否綁定 bind_addr。

# the storage server port
port=23000
# storage server service port

# connect timeout in seconds
# default value is 30s
connect_timeout=30
#鏈接超時時間,針對 socket 套接字函數 connect

# network timeout in seconds
network_timeout=60
# storageserver 網絡超時時間,單位爲秒。發送或接收數據時,若是在超時時間後還不能 發送或接收數據,則本次網絡通訊失敗。

# heart beat interval in seconds
heart_beat_interval=30
# 心跳間隔時間,單位爲秒 (這裏是指主動向 tracker server 發送心跳)

# disk usage report interval in seconds
stat_report_interval=60
# storage server 向 tracker server 報告磁盤剩餘空間的時間間隔,單位爲秒。

# the base path to store data and log files
base_path=/home/yuqing/fastdfs
# base_path 目錄地址,根目錄必須存在 子目錄會自動生成 (注 :這裏不是上傳的文件存放 的地址,以前是的,在某個版本後更改了)
# 目錄結構 由於 版主沒有更新到 論談上 這裏就不發了 你們能夠看一下置頂貼:

# max concurrent connections server supported
# max_connections worker threads start when this service startup max_connections=256
# 同上文

# work thread count, should <= max_connections
# default value is 4
# since V2.00
# V2.0 引入的這個參數,工做線程數,一般設置爲 CPU 數 work_threads=4

# the buff size to recv / send data # default value is 64KB
# since V2.00
buff_size = 256KB
# V2.0 引入本參數。設置隊列結點的 buffer 大小。工做隊列消耗的內存大小 = buff_size * max_connections
# 設置得大一些,系統總體性能會有所提高。
# 消耗的內存請不要超過系統物理內存大小。另外,對於 32 位系統,請注意使用到的內存 不要超過 3GB

# if read / write file directly
# if set to true, open file will add the O_DIRECT flag to avoid file caching
# by the file system. be careful to set this parameter.
# default value is false
disk_rw_direct = false
# V2.09 引入本參數。設置爲 true,表示不使用操做系統的文件內容緩衝特性。 # 若是文件數量不少,且訪問很分散,能夠考慮將本參數設置爲 true

# if disk read / write separated
## false for mixed read and write
## true for separated read and write # default value is true
# since V2.00
disk_rw_separated = true
# V2.0 引入本參數。磁盤 IO 讀寫是否分離,缺省是分離的。

# disk reader thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_reader_threads = 1
# V2.0 引入本參數。針對單個存儲路徑的讀線程數,缺省值爲 1。
# 讀寫分離時,系統中的讀線程數 = disk_reader_threads * store_path_count

# 讀寫混合時,系統中的讀寫線程數 = (disk_reader_threads + disk_writer_threads) * store_path_count
# disk writer thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_writer_threads = 1
# V2.0 引入本參數。針對單個存儲路徑的寫線程數,缺省值爲 1。
# 讀寫分離時,系統中的寫線程數 = disk_writer_threads * store_path_count

# 讀寫混合時,系統中的讀寫線程數 = (disk_reader_threads + disk_writer_threads) * store_path_count
# when no entry to sync, try read binlog again after X milliseconds
# 0 for try again immediately (not need to wait)
sync_wait_msec=200
# 同步文件時,若是從 binlog 中沒有讀到要同步的文件,休眠 N 毫秒後從新讀取。0 表示不 休眠,當即再次嘗試讀取。
# 出於 CPU 消耗考慮,不建議設置爲 0。如何但願同步儘量快一些,能夠將本參數設置得 小一些,好比設置爲 10ms

# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
sync_interval=0
# 同步上一個文件後,再同步下一個文件的時間間隔,單位爲毫秒,0 表示不休眠,直接同 步下一個文件。

# sync start time of a day, time format: Hour:Minute 
# Hour from 0 to 23, Minute from 0 to 59 
sync_start_time=00:00

# sync end time of a day, time format: Hour:Minute # Hour from 0 to 23, Minute from 0 to 59 sync_end_time=23:59
# 上面二個一塊兒解釋。容許系統同步的時間段 (默認是全天) 。通常用於避免高峯同步產生 一些問題而設定,相信 sa 都會明白

# write to the mark file after sync N files
# default value is 500
write_mark_file_freq=500
# 同步完 N 個文件後,把 storage 的 mark 文件同步到磁盤 # 注:若是 mark 文件內容沒有變化,則不會同步

# path(disk or mount point) count, default value is 1
store_path_count=1
# 存放文件時 storage server 支持多個路徑(例如磁盤)。這裏配置存放文件的基路徑數目, 一般只配一個目錄。

# store_path#, based 0, if store_path0 not exists, it's value is base_path # the paths must be exist
store_path0=/home/yuqing/fastdfs #store_path1=/home/yuqing/fastdfs2
# 逐一配置 store_path 個路徑,索引號基於 0。注意配置方法後面有 0,1,2 ......,須要配置 0 到 store_path - 1。
# 若是不配置 base_path0,那邊它就和 base_path 對應的路徑同樣。

# subdir_count * subdir_count directories will be auto created under each
# store_path (disk), value can be 1 to 256, default value is 256
subdir_count_per_path=256
# FastDFS 存儲文件時,採用了兩級目錄。這裏配置存放文件的目錄個數 (系統的存儲機制, 你們看看文件存儲的目錄就知道了)
# 若是本參數只爲 N(如:256),那麼 storage server 在初次運行時,會自動建立 N * N 個 存放文件的子目錄。

# tracker_server can ocur more than once, and tracker_server format is # "host:port", host can be hostname or ip address tracker_server=10.62.164.84:22122 tracker_server=10.62.245.170:22122
# tracker_server 的列表 要寫端口的哦 (再次提醒是主動鏈接 tracker_server ) # 有多個 tracker server 時,每一個 tracker server 寫一行

#standard log level as syslog, case insensitive, value list: ### emerg for emergency
### alert
### crit for critical
### error
### warn for warning ### notice
### info
### debug log_level=info
# 日誌級別很少說

#unix group name to run this program,     
#not set (empty) means run by the group of current user run_by_group=
# 同上文了

#unix username to run this program,
#not set (empty) means run by current user
run_by_user=
# 同上文了 (提醒注意權限 若是和 webserver 不搭 能夠會產生錯誤 哦)

# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or
# host[01-08,20-25].domain.com, for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
allow_hosts=*
# 容許鏈接本 storage server 的 IP 地址列表 (不包括自帶 HTTP 服務的全部鏈接) # 能夠配置多行,每行都會起做用

# the mode of the files distributed to the data path # 0: round robin(default)
# 1: random, distributted by hash code file_distribute_path_mode=0
# 文件在 data 目錄下分散存儲策略。
# 0: 輪流存放,在一個目錄下存儲設置的文件數後(參數 file_distribute_rotate_count 中設置 文件數),使用下一個目錄進行存儲。

# 1: 隨機存儲,根據文件名對應的 hash code 來分散存儲。
# valid when file_distribute_to_path is set to 0 (round robin),
# when the written file count reaches this number, then rotate to next path
# default value is 100
file_distribute_rotate_count=100
# 當上面的參數 file_distribute_path_mode 配置爲 0(輪流存放方式)時,本參數有效。
# 當一個目錄下的文件存放的文件數達到本參數值時,後續上傳的文件存儲到下一個目錄 中。

# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes=0
# 當寫入大文件時,每寫入 N 個字節,調用一次系統函數 fsync 將內容強行同步到硬盤。0 表示從不調用 fsync

# sync log buff to disk every interval seconds
# default value is 10 seconds
sync_log_buff_interval=10
# 同步或刷新日誌信息到硬盤的時間間隔,單位爲秒
# 注意:storage server 的日誌信息不是時時寫硬盤的,而是先寫內存。

# sync binlog buff / cache to disk every interval seconds # this parameter is valid when write_to_binlog set to 1 # default value is 60 seconds sync_binlog_buff_interval=60
# 同步 binglog(更新操做日誌)到硬盤的時間間隔,單位爲秒 # 本參數會影響新上傳文件同步延遲時間

# sync storage stat info to disk every interval seconds
# default value is 300 seconds
sync_stat_file_interval=300
# 把 storage 的 stat 文件同步到磁盤的時間間隔,單位爲秒。 # 注:若是 stat 文件內容沒有變化,不會進行同步

# thread stack size, should >= 512KB
# default value is 512KB
thread_stack_size=512KB
# 線程棧的大小。FastDFS server 端採用了線程方式。
# 對於 V1.x,storage server 線程棧不該小於 512KB;對於 V2.0,線程棧大於等於 128KB 即 可。
# 線程棧越大,一個線程佔用的系統資源就越多。
# 對於 V1.x,若是要啓動更多的線程(max_connections),能夠適當下降本參數值。

# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
upload_priority=10
# 本 storage server 做爲源服務器,上傳文件的優先級,能夠爲負數。值越小,優先級越高。 這裏就和 tracker.conf 中 store_server= 2 時的配置相對應了

# if check file duplicate, when set to true, use FastDHT to store file indexes # 1 or yes: need check
# 0 or no: do not check
# default value is 0
check_file_duplicate=0
# 是否檢測上傳文件已經存在。若是已經存在,則不存在文件內容,創建一個符號連接以節 省磁盤空間。
# 這個應用要配合 FastDHT 使用,因此打開前要先安裝 FastDHT
#1或yes 是檢測,0或no 是不檢測

# file signature method for check file duplicate ## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01 file_signature_method=hash
# 文件去重時,文件內容的簽名方式: ## hash: 4 個 hash code
## md5:MD5

# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
key_namespace=FastDFS
# 當上個參數設定爲 1 或 yes 時 (true/on 也是能夠的) , 在 FastDHT 中的命名空間。

# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
keep_alive=0
# 與 FastDHT servers 的鏈接方式 (是否爲持久鏈接) ,默認是 0(短鏈接方式)。能夠考慮使 用長鏈接,這要看 FastDHT server 的鏈接數是否夠用。

# 下面是關於 FastDHT servers 的設定 須要對 FastDHT servers 有所瞭解,這裏只說字面意思 了
# you can use "#include filename" (not include double quotes) directive to
# load FastDHT server list, when the filename is a relative path such as
# pure filename, the base path is the base path of current/this config file. # must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
##include /home/yuqing/fastdht/conf/fdht_servers.conf
# 能夠經過 #include filename 方式來加載 FastDHT servers 的配置,裝上 FastDHT 就知道 該如何配置啦。
# 一樣要求 check_file_duplicate=1 時纔有用,否則系統會忽略
# fdht_servers.conf 記載的是 FastDHT servers 列表

# if log to access log
# default value is false # since V4.00 use_access_log = false
# 是否將文件操做記錄到 access log

# if rotate the access log every day # default value is false
# since V4.00
rotate_access_log = false
# 是否認期輪轉 access log,目前僅支持一天輪轉一次

# rotate access log time base, time format: Hour:Minute # Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
access_log_rotate_time=00:00
# access log 按期輪轉的時間點,只有當 rotate_access_log 設置爲 true 時有效

# if rotate the error log every day # default value is false
# since V4.02
rotate_error_log = false
# 是否認期輪轉 error log,目前僅支持一天輪轉一次

# rotate error log time base, time format: Hour:Minute # Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time=00:00
# error log 按期輪轉的時間點,只有當 rotate_error_log 設置爲 true 時有效
# rotate access log when the log file exceeds this size # 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_access_log_size = 0
# access log 按文件大小輪轉
# 設置爲 0 表示不按文件大小輪轉,不然當 access log 達到該大小,就會輪轉到新文件中

# rotate error log when the log file exceeds this size # 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0
# error log 按文件大小輪轉
# 設置爲 0 表示不按文件大小輪轉,不然當 error log 達到該大小,就會輪轉到新文件中

# if skip the invalid record when sync file # default value is false
# since V4.02 file_sync_skip_invalid_record=false
# 文件同步的時候,是否忽略無效的 binlog 記錄

下面是 http 的配置了。若是系統較大,這個服務有可能支持不了,能夠自行換一個 webserver, 我喜歡 lighttpd,固然 ng 也很好了。具體不說明了。相應這一塊的說明你們都懂,不明白見 上文。
#HTTP settings
http.disabled=false

# the port of the web server on this storage server http.server_port=8888

http.trunk_size=256KB
# http.trunk_size 表示讀取文件內容的 buffer 大小(一次讀取的文件內容大小),也就是回覆 給 HTTP client 的塊大小。
# use the ip address of this storage server if domain_name is empty,
# else this domain name will ocur in the url redirected by the tracker server
http.domain_name=
# storage server 上 web server 域名,一般僅針對單獨部署的 web server。這樣 URL 中就能夠 經過域名方式來訪問 storage server 上的文件了,
# 這個參數爲空就是 IP 地址的方式。

#use "#include" directive to include HTTP other settiongs ##include http.conf

補充:
storage.conf 中影響 storage server 同步速度的參數有以下幾個:
# when no entry to sync, try read binlog again after X milliseconds
# 0 for try again immediately (not need to wait)
sync_wait_msec=200
# 同步文件時,若是從 binlog 中沒有讀到要同步的文件,休眠 N 毫秒後從新讀取。0 表示
不休眠,當即再次嘗試讀取。
# 不建議設置爲0,如何但願同步儘量快一些,能夠將本參數設置得小一些,好比設置爲
10ms

# after sync a file, usleep milliseconds
 # 0 for sync successively (never call usleep)
 sync_interval=0
  # 同步上一個文件後,再同步下一個文件的時間間隔,單位爲毫秒,0 表示不休眠,直接同
 步下一個文件。
 
# sync start time of a day, time format: Hour:Minute
 # Hour from 0 to 23, Minute from 0 to 59
 sync_start_time=00:00
 
 # sync end time of a day, time format: Hour:Minute
 # Hour from 0 to 23, Minute from 0 to 59
 sync_end_time=23:59
  # 上面二個一塊兒解釋。容許系統同步的時間段 (默認是全天) 。通常用於避免高峯同步產
 生一些問題而設定,相信 sa 都會明白
 
# sync binlog buff / cache to disk every interval seconds
# this parameter is valid when write_to_binlog set to 1
# default value is 60 seconds
sync_binlog_buff_interval=60
# 同步 binglog(更新操做日誌)到硬盤的時間間隔,單位爲秒
# 本參數會影響新上傳文件同步延遲時間