Tag - 环境搭建

环境搭建 hadoop    2019-05-06 06:51:53    40    0    0

Config Metastore

Install the Hive metastore somewhere in your cluster; see the Hive installation section.

As part of this process, you configure the Hive metastore to use an external database as a metastore. Impala uses this same database for its own table metadata. You can choose either a MySQL or PostgreSQL database as the metastore.

It is recommended to set up a Hive metastore service rather than connecting directly to the metastore database; this configuration is required when running Impala under CDH 4.1. Make sure the /etc/impala/conf/hive-site.xml file contains the following settings, substituting the appropriate hostname for metastore_server_host:

<property>
<name>hive.metastore.uris</name>
<value>thrift://metastore_server_host:9083</value>
</property>
<property>
<name>hive.metastore.client.socket.timeout</name>
<value>3600</value>
<description>MetaStore Client socket timeout in seconds</description>
</property>
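
With this in place, the metastore service itself must be running and listening on the Thrift port referenced above. A minimal check (a sketch only; it assumes the Hive binaries are on the PATH of the metastore host):

# start the Hive metastore service in the background
nohup hive --service metastore > metastore.log 2>&1 &
# confirm it is listening on the port from hive.metastore.uris
netstat -tlnp | grep 9083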

Install Impala

环境搭建 hadoop    2019-05-06 06:51:53    908    0    0

By default, Hadoop HTTP web-consoles (JobTracker, NameNode, TaskTrackers and DataNodes) allow access without any form of authentication.

The next section describes how to configure Hadoop HTTP web-consoles to require user authentication.

Configuration

The following properties should be in the core-site.xml of all the nodes in the cluster.

 <property>
      <name>hadoop.http.filter.initializers</name>
      <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
      <description>
                    To enable authentication for the Hadoop HTTP web-consoles,
                    add the org.apache.hadoop.security.AuthenticationFilterInitializer initializer class to this property.
      </description>
    </property>
    <property>
      <name>hadoop.http.authentication.type</name>
      <value>pers.louyj.utils.hadoop.auth.ext.StandardAuthenticationHandler</value>
      <description>
                    Defines authentic
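
With the filter initializer and authentication type configured, a quick sanity check is to request a web console without credentials and expect a rejection. This is only a sketch; the host, port, and exact status code depend on the custom handler configured above:

# an unauthenticated request should no longer return the console page
curl -i http://namenode_host:50070/
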
环境搭建 hadoop    2019-05-06 06:51:53    58    0    0

Install Hive

wget http://mirror.bit.edu.cn/apache/hive/hive-2.1.0/apache-hive-2.1.0-bin.tar.gz
tar zxvf apache-hive-2.1.0-bin.tar.gz
mv apache-hive-2.1.0-bin hive-2.1.0
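
A convenient follow-up (an assumed step, not part of the commands above) is to export HIVE_HOME so the hive and schematool commands resolve from the new directory:

export HIVE_HOME=/home/hadoop/hive-2.1.0
export PATH=$HIVE_HOME/bin:$PATH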

Install PostgreSQL

sudo -u postgres psql

CREATE ROLE hive LOGIN PASSWORD 'hive_password';
CREATE DATABASE metastore OWNER hive ENCODING 'UTF8';
GRANT ALL PRIVILEGES ON DATABASE metastore TO hive;


cd /home/hadoop/hive-2.1.0/lib
wget http://central.maven.org/maven2/org/postgresql/postgresql/9.4.1211.jre7/postgresql-9.4.1211.jre7.jar
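
Once hive-site.xml (see the Configuration section below) points at this PostgreSQL database, the metastore schema can be initialized. A minimal sketch using Hive 2.x's schematool, assuming HIVE_HOME is set as above:

# creates the metastore tables in the PostgreSQL database named in hive-site.xml
$HIVE_HOME/bin/schematool -dbType postgres -initSchema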

Configuration

cd /home/hadoop/hive-2.1.0/conf

vi hive-site.xml

<configuration>
<property>
    <name>hive.exec.scratchdir</name>
    <value>hdfs://linode01.touchworld.link:9000/hive/scratchdir</value>
</property>
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>hdfs://linode01.touchworld.link:9000/hive/warehousedir</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <val
环境搭建 hadoop    2019-05-06 06:51:53    106    0    0

Install database

install postgresql

  1. sudo yum install postgresql-server postgresql

init database

  1. sudo su - postgres
  2. initdb -D /var/lib/pgsql/data

start service

  1. systemctl status postgresql.service
  2. systemctl start postgresql.service
  3. systemctl stop postgresql.service

remote access

  1. vi /var/lib/pgsql/data/postgresql.conf
  2. listen_addresses ='*'
  3. vi /var/lib/pgsql/data/pg_hba.conf
  4. host all all 0.0.0.0/0 trust

restart service

  1. systemctl restart postgresql.service

set password

  1. su - postgres
  2. psql
  3. \password postgres
  4. xxxpgxxx
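
A quick connectivity check from another machine (a hedged sketch; <server_ip> is a placeholder for this server's address):

  1. psql -h <server_ip> -U postgres -c 'SELECT version();'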

create cloudera-manager database

Connect to PostgreSQL:

  1. sudo -u postgres psql

If you are not using the Cloudera Manager installer, create a database for the Cloudera Manager Server. The database name, user name, and password can be any value. Record the names chosen because you will need them later when running the scm_prepare_database.sh script.

  1. CREATE ROLE scm LOGIN PASSWORD 'scm';
  2. CREATE DATABA
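
These values are later passed to the scm_prepare_database.sh script mentioned above. A hedged sketch of that step, assuming Cloudera Manager 5's default script location and the scm/scm credentials from step 1:

  1. sudo /usr/share/cmf/schema/scm_prepare_database.sh postgresql scm scm scm
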
storm 环境搭建 redis    2019-05-06 06:51:53    110    1    0

A single Redis server has limited compute and memory-management capacity, and performance drops sharply once an instance is given too much memory. To get better cache performance and scalability, we need to build a Redis cluster. Because the cluster feature in the Redis 3.0 beta is not suitable for production use, we use Twitter's twemproxy to build the Redis cache server cluster.

Twemproxy is a proxy server for the memcached and Redis protocols, and it effectively reduces the performance impact that large numbers of connections have on the Redis servers.

Installation steps:

1. Download and build Redis

Version 2.8 or later is recommended.
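
A typical download-and-build sequence (the exact version and mirror are assumptions; any 2.8+ release works):

wget http://download.redis.io/releases/redis-2.8.24.tar.gz
tar zxvf redis-2.8.24.tar.gz
cd redis-2.8.24
make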

2. Edit the Redis configuration file

Change the port (it can also be specified at startup):

port 6379

Run in pure in-memory mode by commenting out the save directives:

#save 900 1
#save 300 10
#save 60 10000
#save 900 1000

Adjust the memory limit:

maxmemory 8g

Set a password (it can also be specified at startup):

requirepass foobared

3. Start the Redis service

redis-server ../conf/masters/redis.conf --logfile ../logs/masters/master-01.log --requirepass 'stream!23$' --port 6301
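
To confirm the instance is up and the password is enforced, redis-cli can be pointed at the same port (a quick check using the port and password from the start command above):

redis-cli -p 6301 -a 'stream!23$' ping
# expected reply: PONG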

4. Download and build twemproxy

The 0.4.1 release is recommended.
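
A minimal build sketch, assuming the standard autotools flow from the twemproxy README:

git clone https://github.com/twitter/twemproxy.git
cd twemproxy
git checkout v0.4.1
autoreconf -fvi
./configure
make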

5. Edit the twemproxy configuration

Set the listen port:

listen: 0.0.0.0:6401

Disable automatic host ejection to keep the hashing consistent:

auto_eject_hosts: false

Set the Redis timeout:

timeout: 2000
redis: true

Set the Redis password:

redis_auth: stream!23$

Configure the Redis server addresses (every Redis instance in the cluster must be listed):

servers:
 - 10.221.247.5:6301:1 server01
 - 10.221.247.5:6302:1 server02
 - 10.221.247.5:6303:1 server03
 - 10.221.247.5:6304:1 server04
 - 10.221.247.5:6305:1 server05
 - 10.221.247.5:6306:1 server06
 - 10.221.247.5:6307:1 server07
 - 10.221.247.5
环境搭建 数据库    2019-05-06 06:51:53    30    0    0

Install MongoDB Community Edition

Download the binary files for the desired release of MongoDB.

  1. curl -O https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.2.9.tgz

Extract the files from the downloaded archive.

  1. tar -zxvf mongodb-linux-x86_64-3.2.9.tgz

Copy the extracted archive to the target directory.

  1. mkdir -p ~/mongodb-3.2.9
  2. mv mongodb-linux-x86_64-3.2.9 ~/mongodb-3.2.9

Ensure the location of the binaries is in the PATH variable.

  1. export PATH=<mongodb-install-directory>/bin:$PATH

Run MongoDB Community Edition

Create the data directory

Before you start MongoDB for the first time, create the directory to which the mongod process will write data.

By default, the mongod process uses the /data/db directory.

If you create a directory other than this one, you must specify that directory in the dbpath option when starting the mongod process later in this procedure.

  1. mkdir -p /data/mongodb
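
Because /data/mongodb differs from the default /data/db, mongod will need --dbpath pointing at it when it is started later. A minimal sketch, assuming the bin directory is on PATH as set above:

  1. mongod --dbpath /data/mongodb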

Set permission

环境搭建 redis    2019-05-06 06:51:53    109    1    0

Download and build the source

  1. https://github.com/eleme/corvus/archive/0.2.5.1.zip
  2. unzip corvus-0.2.5.1
  3. https://github.com/jemalloc/jemalloc/archive/4.5.0.tar.gz
  4. tar zxvf jemalloc-4.5.0.tar.gz
  5. rm -r corvus-0.2.5.1/deps/jemalloc
  6. mv jemalloc-4.5.0 corvus-0.2.5.1/deps/jemalloc
  7. cd corvus-0.2.5.1/deps/jemalloc
  8. autoconf
  9. cd ../../
  10. make

Configuration

  1. bind 5101
  2. node 192.168.1.202:5001,192.168.1.202:5002,192.168.1.203:5001,192.168.1.203:5002,192.168.1.204:5001,192.168.1.202:5002
  3. thread 4

Start and Stop

  1. /root/upload/corvus-0.2.5.1/src/corvus /root/upload/corvus-0.2.5.1/corvus.conf > corvus.log 2>&1 &

Benchmark

  1. /root/upload/redis-3.2.8/src/redis-benchmark -p 5101 -q -t get,set -r 1000000 -n 2000000 -P 16 -c 10
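
Beyond raw benchmarks, a quick functional check through the proxy (the key and value are arbitrary):

  1. /root/upload/redis-3.2.8/src/redis-cli -p 5101 set foo bar
  2. /root/upload/redis-3.2.8/src/redis-cli -p 5101 get foo
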
环境搭建    2019-05-06 06:51:53    131    0    0
Open365 Installer
=================

# Overview

This is the main [Open365](https://open365.io/) installer. It installs all the required components to run Open365 on your computer.

# Requirements -
环境搭建    2019-05-06 06:51:53    68    0    0
## Install JDK

cd /opt/
sudo wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.ora
环境搭建    2019-05-06 06:51:53    49    0    0
yum install libnet libpcap libnet-devel libpcap-devel epel-release
git clone https://github.com/snooda/net-speeder
cd net-speeder
chmod +x ./build.sh
bash ./build.sh
./n