Tag - 环境搭建 (Environment Setup)

环境搭建    2019-05-06 06:51:53    37    0    0

This procedure explains how to download and install the binary distribution on a Unix system.

Download ActiveMQ

Download the ActiveMQ gzipped tarball to the Unix machine, using either a browser or a command-line tool such as wget, scp, or ftp. For example:

http://mirrors.cnnic.cn/apache//activemq/5.14.0/apache-activemq-5.14.0-bin.tar.gz

Extract the files from the zipped tarball into a directory of your choice. For example:

cd [activemq_install_dir]
tar zxvf apache-activemq-5.14.0-bin.tar.gz 

Starting ActiveMQ

From a command shell, change to the installation directory and run ActiveMQ as a foreground process:

cd [activemq_install_dir]/bin
./activemq console

From a command shell, change to the installation directory and run ActiveMQ as a daemon process:

cd [activemq_install_dir]/bin
./activemq start

Testing the Installation

  1. Using the administrative interface
    Open the administrative interface
    URL: http://127.0.0.1:8161/ad
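Besides the web console, you can check from the shell that the broker came up. A minimal sketch; the port numbers are assumptions based on ActiveMQ's stock configuration (61616 for the default OpenWire transport, 8161 for the web console):

```shell
# Ask the init script for the broker's status (prints the PID when running)
[activemq_install_dir]/bin/activemq status

# Check that the default transport and web console ports are listening
netstat -tln | grep -E '61616|8161'
```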


Install the Server

Install a recent version of Erlang.

yum install erlang

Download rabbitmq-server-generic-unix-3.6.5.tar.xz from the link below.
Contained in the tarball is a directory named rabbitmq_server-3.6.5. You should extract this into somewhere appropriate for application binaries on your system. The sbin directory will be found in this directory.

 wget http://www.rabbitmq.com/releases/rabbitmq-server/v3.6.5/rabbitmq-server-generic-unix-3.6.5.tar.xz
 tar Jxvf rabbitmq-server-generic-unix-3.6.5.tar.xz 
 mv rabbitmq_server-3.6.5/ /xxx/rabbitmq

Run RabbitMQ Server

Start the Server

sbin/rabbitmq-server

This displays a short banner message, concluding with the message “completed with [n] plugins.”, indicating that the RabbitMQ broker has been started successfully.

You can also start the server in “detached” mode, in which case the server process runs in the background:

rabbitmq-server -detached
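Once the server is running, rabbitmqctl (shipped in the same sbin directory) can confirm the node is up and stop it cleanly; a minimal sketch:

```shell
# Query the running node; prints node status including started plugins
sbin/rabbitmqctl status

# Cleanly stop a broker that was started with -detached
sbin/rabbitmqctl stop
```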


Configuring Memory Caching

You can significantly improve your ownCloud server performance with memory caching, where frequently-requested objects are stored in memory for faster retrieval.

There are two types of caches to use: a PHP opcode cache, which is commonly called opcache, and data caching for your Web server.

A PHP opcache stores compiled PHP scripts so they don’t need to be re-compiled every time they are called. PHP bundles the Zend OPcache in core since version 5.5, so you don’t need to install an opcache for PHP 5.5+.

Data caching is supplied by APCu (the Alternative PHP Cache user cache, available in PHP 5.5+), Memcached, or Redis.

Add the appropriate entries to your config.php, then refresh your ownCloud admin page:

  'memcache.local' => '\OC\Memcache\Redis',
  'redis' => array(
    'host' => 'localhost',
    'port' => 6379,
  ),
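Before pointing ownCloud at Redis, it is worth confirming that a Redis server is actually reachable on the host/port used in config.php, and that the PHP redis extension (which \OC\Memcache\Redis relies on) is loaded. A quick check, assuming redis-cli is installed:

```shell
# Should print PONG if Redis is listening on the configured host/port
redis-cli -h localhost -p 6379 ping

# The backend also needs the PHP redis extension loaded
php -m | grep -i redis
```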

Compile and install PHP (enable-fpm)

./configure --prefix=/root/programs/php5.6 --with-libxml-dir=/usr --with-pdo-pgsql=/usr --with-pgsql=/usr --enable-mbstring=all --enable-zip --with-zlib --enable-xml --enable-sockets --with-mcrypt --with-bz2 --with-openssl --with-gd --with-curl --enable-maintainer-zts --with-config-file-path=/root/programs/php5.6/etc --with-freetype-dir --with-jpeg-dir --with-png-dir --enable-ftp --enable-fpm --disable-cgi --with-iconv --enable-bcmath --enable-calendar --enable-exif --enable-libxml --with-xmlrpc --with-gettext --enable-pcntl --enable-sysvsem --enable-inline-optimization --enable-soap
make
make install
ln -s /root/programs/php5.6/bin/php /usr/bin/php
cp /root/sources/php-5.6.7/php.ini-production /root/programs/php5.6/etc/php.ini
# download the CA certificate bundle
wget https://curl.haxx.se/ca/cacert.pem
mv cacert.pem /root/programs/php5.6/cacert.pem
vi /root/programs/php5.6/etc/php.ini
date.timezone = Asia/Shanghai
openssl.cafile=/root/program
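After make install, a few quick checks confirm the build picked up the intended options; a sketch assuming the install prefix used above:

```shell
# Confirm the CLI binary that /usr/bin/php now points at
php -v

# Confirm FPM was built (--enable-fpm installs php-fpm under the prefix's sbin)
/root/programs/php5.6/sbin/php-fpm -v

# Confirm php.ini is being read from the configured path
php --ini
```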


Download the leanote binary release

Download the latest leanote binary release.

Assuming the file was downloaded to /home/user1, extract it:

$> cd /home/user1
$> tar -xzvf leanote-.tar.gz

There is now a leanote directory under /home/user1; take a look at what is inside:

$> cd leanote
$> ls
app  bin  conf  messages  mongodb_backup  public

That is it for leanote for the moment; next, install the MongoDB database.

Install mongodb

Download it from http://www.mongodb.org/downloads

Download it to /home/user1 and simply extract it:

$> cd /home/user1
$> tar -xzvf mongodb-linux-x86_64-2.6.4.tgz

To use the mongodb commands conveniently, you can set up an environment variable:

edit ~/.profile or /etc/profile and add the mongodb bin path.

$> sudo vim /etc/profile
Add:
export PATH=$PATH:/home/user1/mongodb-linux-x86_64-2.6.4/bin

Make the environment variable take effect:

$> source /etc/profile

Basic mongodb usage

First create a data directory under /home/user1 to hold the mongodb data:

mkdir /home/user1/data
# start mongodb
mongod --dbpath /home/user1/data

mongod is now running.

Open a new terminal and try mongodb:

$> mongo
> show dbs
...list of databases

That completes the mongodb installation; next, import the initial leanote data into mongodb.

Import the initial data

The initial leanote data is in /h
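The directory listing earlier showed a mongodb_backup directory inside the leanote tree. A hedged sketch of importing it with mongorestore; the exact dump path below is an assumption, so adjust it to wherever the initial data actually lives:

```shell
# Import the initial leanote data into the running mongod
# (the subdirectory name under mongodb_backup is an assumption - check it first)
mongorestore -h localhost -d leanote --drop /home/user1/leanote/mongodb_backup/leanote_install_data/
```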


1. Install httpd

yum install apr apr-devel
yum install apr-util apr-util-devel
yum install pcre pcre-devel
./configure --prefix=/root/programs/httpd2.2 --enable-ssl --enable-cgi --enable-rewrite --enable-zip --with-zlib --with-pcre --with-apr=/usr --enable-mbstring --with-apr-util=/usr --enable-modules=most --enable-mpms-shared=all --with-mpm=event
./apachectl start

2. Install PHP

#install libxml
./configure --prefix=/root/programs/libxml2
make
make install​​

yum install zlib zlib-devel
yum install openssl openssl-devel
yum install libmcrypt libmcrypt-devel
yum install mhash mhash-devel
yum install libpqxx libpqxx-devel
yum install libcurl libcurl-devel
yum install gd gd-devel
yum install bzip2 bzip2-devel

./configure --prefix=/root/programs/php5.6 --with-libxml-dir=/root/programs/libxml2 --with-apxs2=/root/programs/httpd2.2/bin/apxs  --with-pdo-pgsql=/usr/pgsql-9.4 --with-pgsql=/usr/pgsql-9.4 --enable-mbstring --enable-zip --with-zlib
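With --with-apxs2, the PHP build installs libphp5.so into httpd's modules directory and adds a LoadModule line to httpd.conf; what usually still needs adding by hand is the handler mapping for .php files. A sketch, assuming the prefixes used above:

```shell
# Verify the PHP module was installed and loaded into httpd
/root/programs/httpd2.2/bin/httpd -M | grep php

# Map .php files to the PHP handler if httpd.conf does not already do so
echo 'AddType application/x-httpd-php .php' >> /root/programs/httpd2.2/conf/httpd.conf

# Restart httpd to pick up the change
/root/programs/httpd2.2/bin/apachectl restart
```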

Introduction

Kafka is a distributed, partitioned, replicated commit log service. It provides features similar to JMS, but its design and implementation are completely different, and it is not an implementation of the JMS specification. Kafka groups stored messages by Topic; a message sender is called a Producer and a message receiver a Consumer. A Kafka cluster consists of multiple Kafka instances, each of which (each server) is called a broker. The Kafka cluster, the producers, and the consumers all rely on ZooKeeper to guarantee availability and to store meta information.


Topics/logs/consumer/Producer

A Topic can be thought of as a category of messages. Each topic is split into multiple partitions, and at the storage level each partition is an append-only log file. Any message published to a partition is appended to the tail of that log file. The position of a message in the file is called its offset, a long integer that uniquely identifies the message. Kafka provides no additional index structure over offsets, because Kafka allows almost no "random reads/writes" of messages.


A consumer needs to keep track of the offset of the messages it has consumed; how the offset is saved and used is controlled by the consumer. When a consumer consumes messages normally, the offset advances "linearly", i.e. messages are consumed in sequence. In fact, a consumer may consume messages in any order it likes; it only needs to reset the offset to an arbitrary value.
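Because the consumer controls the offset, replaying a topic is just a matter of starting from the beginning again. A sketch using the console tools that ship with Kafka; the topic name and addresses are assumptions, and note that the flags vary by Kafka version (newer console consumers take --bootstrap-server instead of --zookeeper):

```shell
# Publish a few messages to a (hypothetical) topic named "test"
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

# Consume from offset 0 rather than from the latest position,
# demonstrating that the consumer can reset its offset at will
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
```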

A Kafka cluster maintains almost no state about consumers and producers; that information is kept in ZooKeeper. Producer and consumer clients are therefore very lightweight, and they can come and go at will without placing extra load on the cluster.

Partitions serve several purposes. The most fundamental is that Kafka stores data in files: by partitioning, log contents can be spread across multiple servers so that no single file grows to the limit of one machine's disk. Each partition is stored by the server (Kafka instance) hosting it. A topic can be split into arbitrarily many partitions, in order to
