Tag - Environment Setup

Environment Setup    2019-05-06 06:51:53

Install

Debian / Ubuntu:

  apt-get install python-pip
  pip install shadowsocks

CentOS:

  yum install python-setuptools && easy_install pip
  # alternatively: curl -o get-pip.py https://bootstrap.pypa.io/pip/2.7/get-pip.py && python get-pip.py
  pip install shadowsocks

Windows:

See [Install Server on Windows]

Usage

  ssserver -p 443 -k password -m aes-256-cfb

To run in the background:

  sudo ssserver -p 443 -k password -m aes-256-cfb --user nobody -d start
  ssserver --log-file /var/log/shadowsocks-12300.log --pid-file /var/run/shadowsocks-12300.pid --user nobody -p 12300 -k password -m aes-256-cfb -s 0.0.0.0 -d start

To stop:

  sudo ssserver -d stop

To check the log:

  sudo less /var/log/shadowsocks.log

Check all the options via -h. You can also use a [Configuration] file
instead.
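
A minimal sketch of such a configuration file, matching the options used
above (the /etc/shadowsocks.json path is just an assumption; any path passed
via -c works):

{
    "server": "0.0.0.0",
    "server_port": 443,
    "password": "password",
    "timeout": 300,
    "method": "aes-256-cfb"
}

Then start the server with:

  sudo ssserver -c /etc/shadowsocks.json --user nobody -d start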

Client

  • [Windows] / [OS X]
  • [Android] / [iOS]
  • [OpenWRT]

Use GUI clients on your local PC/phones. Check the README of your client
for more information.

Documentation

Environment Setup hadoop    2019-05-06 06:51:53

Install JDK

See the JDK installation guide.

Configure SSH

See the passwordless SSH login guide.

Install NTP

  yum install ntp
  vi /etc/ntp.conf
  # add the Aliyun time servers to ntp.conf:
  server ntp1.aliyun.com iburst
  server ntp2.aliyun.com iburst
  server ntp3.aliyun.com iburst
  systemctl enable ntpd
  systemctl start ntpd
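
Once ntpd is up, a quick check that it is actually syncing (not part of the
original steps):

  ntpq -p    # peers marked with '*' are the current sync source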

Install Hadoop

Download Hadoop

  wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.5/hadoop-2.7.5.tar.gz
  tar zxvf hadoop-2.7.5.tar.gz
  cd hadoop-2.7.5/

Configure Hadoop

Config Env

  vi .bashrc
  export JAVA_HOME=/opt/jdk1.8.0_202
  export HADOOP_PID_DIR=/data/hadooptemp
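
Alongside these, HADOOP_HOME and PATH are typically exported as well; a
sketch, assuming the hadoop-2.7.5 directory extracted above sits in the
user's home directory:

  export HADOOP_HOME=~/hadoop-2.7.5              # assumed install location
  export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
  source ~/.bashrc                               # reload the environment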

Configure slaves

  vi slaves
  test01
  test02
  test03

Configure core-site.xml

  mkdir /data/hadoop
  mkdir /data/hadooptemp

  <configuration>
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/data/hadoop</value>
      <description>A base for other temporary directories.</description>
    </property>
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://cluster01</value>
      <descr
Environment Setup    2019-05-06 06:51:53

1. Download and extract nginx, pcre, and zlib

    wget http://nginx.org/download/nginx-1.11.2.tar.gz
    tar zxvf nginx-1.11.2.tar.gz 
    wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.37.tar.bz2
    tar jxvf pcre-8.37.tar.bz2 
    wget http://zlib.net/zlib-1.2.8.tar.gz
    tar zxvf zlib-1.2.8.tar.gz

2. Add nginx watermark support

    Nginx-image-filter-watermark

    wget https://codeload.github.com/intaro/nginx-image-filter-watermark/zip/master
    unzip master 
    cd nginx-image-filter-watermark-master/
    cp ngx_http_image_filter_module.c ../nginx-1.11.2/src/http/modules/

3. Compile and install

    cd nginx-1.11.2
    sudo yum install openssl openssl-devel gd gd-devel
    ./configure --prefix=/root/programs/nginx --with-http_ssl_module --with-pcre=../pcre-8.37 --with-zlib=../zlib-1.2.8 --with-http_image_filter_module
    make
    make install
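
After make install, it is worth confirming that the image filter module was
compiled in (the binary path follows the --prefix used above):

    /root/programs/nginx/sbin/nginx -V 2>&1 | grep image_filter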

4. Configuration file

    user root;
    worker_processes 20;

    http {
        include mime.types;
        default_type application/octet-stream;

        client_max_body_size 10m;

        server {
            listen 10000;
            server_name www.xx
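
For reference, a location block for this module usually combines the stock
http_image_filter directives with the watermark directives the fork adds
(directive names follow the fork's README; the paths and sizes below are
assumptions):

    location ~* \.(jpg|jpeg|png)$ {
        image_filter_buffer 10M;                        # stock http_image_filter directive
        image_filter_watermark /data/watermark.png;     # added by the fork; assumed path
        image_filter_watermark_position bottom-right;   # position value per the fork's README
        image_filter resize 800 -;                      # stock directive; assumed size
    }
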
Environment Setup    2019-05-06 06:51:53

1. Install VMware

    Version 12.1

2. Download the CentOS 7 image.

        http://101.44.1.4/files/50220000045FC39E/mirror.neu.edu.cn/centos/7.2.1511/isos/x86_64/CentOS-7-x86_64-Minimal-1511.iso

3. Modify the NIC configuration

    3.1 Obtain a dynamic IP address automatically

    Run "ip addr"; no IP address has been assigned (CentOS 7 ships without the ifconfig command by default). Note down the NIC name (eno16777736 in this example).

    Run "cd /etc/sysconfig/network-scripts/", then "ls" to list the files.

    Run "sudo vi ifcfg-eno16777736" (the NIC name may differ), or edit the file directly with "sudo vi /etc/sysconfig/network-scripts/ifcfg-eno16777736".

    The last line reads "ONBOOT=no".

    Press "i" to enter insert mode, change the "no" on the last line to "yes", press "ESC" to leave insert mode, then type ":x" to save and quit.

    Run "service network restart" to restart the service, or "systemctl restart network".

    Run "ip addr" again; an IP address is now obtained automatically.

    3.2 Set a static IP address

    Run "sudo vi ifcfg-eno16777736" (the NIC name may differ), or edit the file directly with "sudo vi /etc/sysconfig/network-scripts/ifcfg-eno16777736".

    Set BOOTPROTO="static" (none disables DHCP, static enables a static IP address, dhcp enables DHCP).

    Adjust the remaining settings; this example uses 192.168.1.200/24 with gateway 192.168.1.1. Note: NM_CONTROLLED=no and ONBOOT=yes can be set as needed.

BOOTPROTO="static" # changed from dhcp to static
ONBOOT="y
storm Environment Setup    2019-05-06 06:51:53

1. Storm cluster components

A Storm cluster contains two kinds of nodes: the master node and the worker nodes. Their respective roles are as follows:

  •  The master node runs a daemon called Nimbus, which distributes code around the cluster, assigns tasks to worker machines, and monitors the cluster for failures. Nimbus plays a role similar to the JobTracker in Hadoop.
  •  Each worker node runs a daemon called Supervisor. The Supervisor listens for work assigned to it by Nimbus and starts or stops worker processes accordingly. Each worker process executes a subset of a topology; a running topology consists of many worker processes spread across many worker nodes.

 


[Figure: Storm cluster components]

 

All coordination between Nimbus and the Supervisors is done through a Zookeeper cluster. In addition, the Nimbus and Supervisor daemons are fail-fast and stateless; all of Storm's cluster state lives either in the Zookeeper cluster or on local disk. This means you can kill -9 the Nimbus or Supervisor processes and they pick up where they left off after a restart. This design makes Storm clusters remarkably stable.

 

2. Installing a Storm cluster

2.1 Set up a Zookeeper cluster

See the Zookeeper cluster setup guide.

2.2 Install JDK 7

1. Download and install JDK 7;

2. Set the JAVA_HOME environment variable;

3. Run the java and javac commands to verify that Java is installed correctly.

2.3 Download and extract a Storm release

Next, install a Storm release on the Nimbus and Supervisor machines.

1. Download a Storm release; Storm 0.10.0 is recommended:

2. Extract it into the installation directory (/home/ipms/storm-0.10.0).

3. Edit the storm.yaml configuration file

1) storm.zookeeper.servers: the addresses of the Zookeeper cluster used by Storm, in the following format:

storm.zookeeper.servers:
  - "111.222.333.444"
  - "555.666.777.888"​

If the Zookeeper cluster does not use the default port, the storm.zookeeper.port option must also be set.

2
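
For reference, the other storm.yaml entries usually set on 0.10.x are
sketched below (key names come from the Storm defaults; the values are
assumptions):

storm.zookeeper.port: 2181                    # only needed for a non-default port
storm.local.dir: "/home/ipms/storm-workdir"   # assumed scratch dir for Nimbus/Supervisor state
nimbus.host: "111.222.333.444"                # Nimbus address (0.10.x uses nimbus.host)
supervisor.slots.ports:                       # one worker slot per port
  - 6700
  - 6701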

Environment Setup storm    2019-05-06 06:51:53
Building Heron on CentOS 7

Step 1 - Install the required dependencies

sudo yum install gcc gcc-c++ kernel-devel wget unzip zlib-devel zip git automake cmake patch libtool -y
yum install
Environment Setup hadoop cdh    2019-05-06 06:51:53

Before You Begin

SSH Configuration

vi /etc/hosts
x.x.x.x linode01
x.x.x.x linode02
x.x.x.x linode03

hostnamectl set-hostname linode01

ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub linode01
ssh-copy-id -i ~/.ssh/id_rsa.pub linode02
ssh-copy-id -i ~/.ssh/id_rsa.pub linode03
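
A quick sanity check that key-based login works to each node (not part of
the original steps):

ssh linode02 hostname    # should print "linode02" with no password prompt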

Disable Firewall

systemctl stop firewalld.service
systemctl disable firewalld.service

Dependency

yum install psmisc -y
yum install libxslt-devel -y
yum install chkconfig bind-utils psmisc libxslt zlib sqlite cyrus-sasl-plain cyrus-sasl-gssapi fuse portmap fuse-libs redhat-lsb -y
yum install python-psycopg2 -y 
yum install snappy snappy-devel  -y

#NFS
yum install rpcbind -y
service rpcbind start

Install the Oracle JDK

cd /opt/
sudo wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn/java/jdk/7u80-b15/jdk-7u80-linux-x64
Environment Setup hadoop cdh    2019-05-06 06:51:53

Install and Configure External Databases

sudo yum install postgresql-server postgresql -y
sudo su - postgres
initdb -D /var/lib/pgsql/data

#remote access
vi /var/lib/pgsql/data/postgresql.conf
listen_addresses = '*'

vi /var/lib/pgsql/data/pg_h
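
For remote access, a host rule is typically appended to pg_hba.conf in the
same data directory; a sketch with an assumed subnet:

# /var/lib/pgsql/data/pg_hba.conf
host    all    all    192.168.0.0/24    md5    # allow the cluster hosts to authenticate
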
Environment Setup hadoop    2019-05-06 06:51:53

Overview

The NFS Gateway supports NFSv3 and allows HDFS to be mounted as part of the client’s local file system. Currently NFS Gateway supports and enables the following usage patterns:

  • Users can browse the HDFS file system through their local file system on NFSv3 client compatible operating systems.
  • Users can download files from the HDFS file system on to their local file system.
  • Users can upload files from their local file system directly to the HDFS file system.
  • Users can stream data directly to HDFS through the mount point. File append is supported but random write is not supported.

The NFS gateway machine needs everything an HDFS client needs to run, such as the Hadoop JAR files and a HADOOP_CONF directory. The NFS gateway can be on the same host as a DataNode, NameNode, or any HDFS client.

Configuration

In core-site.xml of the NameNode, the following must be set (in non-secure mode):

<property>
  <name>hadoop.proxyuser.nfs
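
For reference, the proxy-user pair documented for the NFS gateway looks like
the following ('nfsserver' is the user running the gateway; the group and
host values are per-deployment assumptions):

<property>
  <name>hadoop.proxyuser.nfsserver.groups</name>
  <value>root,users-group1</value>           <!-- groups whose users the gateway may proxy (assumed) -->
</property>
<property>
  <name>hadoop.proxyuser.nfsserver.hosts</name>
  <value>nfs-client-host1.com</value>        <!-- host(s) where the gateway runs (assumed) -->
</property>
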
Environment Setup hadoop    2019-05-06 06:51:53

Install Scala

wget http://downloads.lightbend.com/scala/2.10.6/scala-2.10.6.tgz
tar zxvf scala-2.10.6.tgz

vi /etc/profile

export SCALA_HOME=/home/hadoop/scala-2.10.6
export PATH=$PATH:$SCALA_HOME/bin

source /etc/profile
scala -version

Install Spark

wget http://d3kbcqa49mib13.cloudfront.net/spark-1.6.2-bin-hadoop2.6.tgz
tar zxvf spark-1.6.2-bin-hadoop2.6.tgz
cd spark-1.6.2-bin-hadoop2.6/conf
cp spark-env.sh.template spark-env.sh

vi spark-env.sh

export JAVA_HOME=/opt/jdk1.8.0_91
export SCALA_HOME=/home/hadoop/scala-2.10.6
export HADOOP_HOME=/home/hadoop/hadoop-2.6.4
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export YARN_HOME=/home/hadoop/hadoop-2.6.4
export YARN_CONF_DIR=${YARN_HOME}/etc/hadoop
export SPARK_HOME=/home/hadoop/spark-1.6.2-bin-hadoop2.6
export SPARK_LOCAL_DIRS=/home/hadoop/spark-1.6.2-bin-hadoop2.6
export SPARK_LIBRARY_PATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$HADOOP_HOME/lib/native
expo
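
Once spark-env.sh (and conf/slaves) are in place, a standalone cluster is
usually brought up with the bundled scripts; a sketch assuming the paths
above and a hypothetical master host name:

cd /home/hadoop/spark-1.6.2-bin-hadoop2.6
./sbin/start-all.sh                                   # starts the master plus the workers listed in conf/slaves
./bin/spark-shell --master spark://master01:7077      # master01 is a hypothetical host name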