Debian / Ubuntu:
apt-get install python-pip
pip install shadowsocks
CentOS:
yum install python-setuptools && easy_install pip
#curl -o get-pip.py https://bootstrap.pypa.io/pip/2.7/get-pip.py && python get-pip.py
pip install shadowsocks
Windows:
See [Install Server on Windows]
ssserver -p 443 -k password -m aes-256-cfb
To run in the background:
sudo ssserver -p 443 -k password -m aes-256-cfb --user nobody -d start
ssserver --log-file /var/log/shadowsocks-12300.log --pid-file /var/run/shadowsocks-12300.pid --user nobody -p 12300 -k password -m aes-256-cfb -s 0.0.0.0 -d start
To stop:
sudo ssserver -d stop
To check the log:
sudo less /var/log/shadowsocks.log
Check all the options via -h. You can also use a [Configuration] file instead.
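For reference, a minimal sketch of such a configuration file, mirroring the command-line flags used above (the /tmp path and all values are placeholders; adjust to your setup):

```shell
# Hypothetical config file matching the flags above
# (path and values are placeholders, not from the original notes).
cat > /tmp/shadowsocks.json <<'EOF'
{
    "server": "0.0.0.0",
    "server_port": 443,
    "password": "password",
    "method": "aes-256-cfb",
    "timeout": 300
}
EOF
# Then start the server from the config file instead of flags:
#   sudo ssserver -c /tmp/shadowsocks.json -d start
```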
Use GUI clients on your local PC/phones. Check the README of your client
for more information.
yum install ntp
vi /etc/ntp.conf
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
server ntp3.aliyun.com iburst
systemctl enable ntpd
systemctl start ntpd
wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.5/hadoop-2.7.5.tar.gz
tar zxvf hadoop-2.7.5.tar.gz
cd hadoop-2.7.5/
Configure environment variables
vi .bashrc
export JAVA_HOME=/opt/jdk1.8.0_202
export HADOOP_PID_DIR=/data/hadooptemp
Configure slaves
vi slaves
test01
test02
test03
Configure core-site.xml
mkdir /data/hadoop
mkdir /data/hadooptemp
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://cluster01</value>
    <description>The name of the default file system.</description>
  </property>
</configuration>
wget http://nginx.org/download/nginx-1.11.2.tar.gz
tar zxvf nginx-1.11.2.tar.gz
wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.37.tar.bz2
tar jxvf pcre-8.37.tar.bz2
wget http://zlib.net/zlib-1.2.8.tar.gz
tar zxvf zlib-1.2.8.tar.gz
wget https://codeload.github.com/intaro/nginx-image-filter-watermark/zip/master
unzip master
cd nginx-image-filter-watermark-master/
cp ngx_http_image_filter_module.c ../nginx-1.11.2/src/http/modules/
cd ../nginx-1.11.2
sudo yum install openssl openssl-devel gd gd-devel
./configure --prefix=/root/programs/nginx --with-http_ssl_module --with-pcre=../pcre-8.37 --with-zlib=../zlib-1.2.8 --with-http_image_filter_module
make
make install
user root;
worker_processes 20;
http {
    include mime.types;
    default_type application/octet-stream;
    client_max_body_size 10m;
    server {
        listen 10000;
        server_name www.xx
Version 12.1
Type "ip addr" and press Enter; no IP address has been obtained (CentOS 7 does not ship the ifconfig command by default). Note down the NIC name (eno16777736 in this example).
Type "cd /etc/sysconfig/network-scripts/" and press Enter, then type "ls" to list the files.
Type "sudo vi ifcfg-eno16777736" and press Enter (the NIC name may differ). Alternatively, skip the previous step and edit the file directly with "sudo vi /etc/sysconfig/network-scripts/ifcfg-eno16777736".
Look at the last entry: it reads "ONBOOT=no".
Press "i" to enter insert mode, change the "no" on the last line to "yes", press "ESC" to leave insert mode, then type ":x" to save and quit.
Type "service network restart" to restart the networking service; "systemctl restart network" also works.
Type "ip addr" again; an IP address is now obtained automatically.
Type "sudo vi ifcfg-eno16777736" and press Enter (the NIC name may differ). Alternatively, edit the file directly with "sudo vi /etc/sysconfig/network-scripts/ifcfg-eno16777736".
Set BOOTPROTO="static" (none disables DHCP, static enables a static IP address, and dhcp enables the DHCP service).
Then adjust the other settings; in this example the address is 192.168.1.200/24 with gateway 192.168.1.1. Note: NM_CONTROLLED=no and ONBOOT=yes can be set as required.
BOOTPROTO="static"   # changed from dhcp to static
ONBOOT="y
A Storm cluster contains two kinds of nodes: the master node and the worker nodes. Their respective roles are as follows:
Storm cluster components
All coordination between the Nimbus and Supervisor nodes is done through a Zookeeper cluster. In addition, both the Nimbus and Supervisor processes are fail-fast and stateless; all of a Storm cluster's state lives either in the Zookeeper cluster or on local disk. This means you can kill the Nimbus and Supervisor processes with kill -9 and they will pick up where they left off after a restart. This design gives Storm clusters remarkable stability.
1. Download and install JDK 7;
2. Set the JAVA_HOME environment variable;
3. Run the java and javac commands to verify that Java is installed correctly.
Next, install the Storm distribution on the Nimbus and Supervisor machines.
1. Download a Storm release; Storm 0.10.0 is recommended:
2. Unpack it into the installation directory (/home/ipms/storm-0.10.0)
3. Edit the storm.yaml configuration file
1) storm.zookeeper.servers: the addresses of the Zookeeper cluster used by the Storm cluster, in the following format:
storm.zookeeper.servers:
    - "111.222.333.444"
    - "555.666.777.888"
If your Zookeeper cluster does not use the default port, you also need to set the storm.zookeeper.port option.
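For example, a non-default port would be declared in storm.yaml alongside the server list like this (Zookeeper's default port is 2181; the value below is purely illustrative):

```yaml
# Only needed when Zookeeper does not listen on its default port (2181).
storm.zookeeper.port: 2182
```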
2
vi /etc/hosts
x.x.x.x linode01
x.x.x.x linode02
x.x.x.x linode03
hostnamectl set-hostname linode01
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub linode01
ssh-copy-id -i ~/.ssh/id_rsa.pub linode02
ssh-copy-id -i ~/.ssh/id_rsa.pub linode03
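In essence, ssh-copy-id appends the public key to ~/.ssh/authorized_keys on the remote host. A local sketch of the same effect, using placeholder paths under /tmp rather than a real remote host:

```shell
# Generate a key pair non-interactively (placeholder path, empty passphrase).
ssh-keygen -t rsa -N '' -f /tmp/demo_id_rsa -q
# What ssh-copy-id does on the remote side, in essence:
mkdir -p /tmp/demo_ssh
cat /tmp/demo_id_rsa.pub >> /tmp/demo_ssh/authorized_keys
chmod 600 /tmp/demo_ssh/authorized_keys
```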
systemctl stop firewalld.service
systemctl disable firewalld.service
yum install psmisc -y
yum install libxslt-devel -y
yum install chkconfig bind-utils psmisc libxslt zlib sqlite cyrus-sasl-plain cyrus-sasl-gssapi fuse portmap fuse-libs redhat-lsb -y
yum install python-psycopg2 -y
yum install snappy snappy-devel -y
#NFS
yum install rpcbind -y
service rpcbind start
cd /opt/
sudo wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn/java/jdk/7u80-b15/jdk-7u80-linux-x64
sudo yum install postgresql-server postgresql -y
sudo su - postgres
initdb -D /var/lib/pgsql/data
#remote access
vi /var/lib/pgsql/data/postgresql.conf
listen_addresses ='*'
vi /var/lib/pgsql/data/pg_h
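The file being opened above appears to be pg_hba.conf; assuming that, a typical entry allowing remote password-authenticated connections looks like the following (the 0.0.0.0/0 range is an illustrative placeholder and should be narrowed for production):

```
host    all    all    0.0.0.0/0    md5
```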
The NFS Gateway supports NFSv3 and allows HDFS to be mounted as part of the client's local file system. Currently the NFS Gateway supports and enables the following usage patterns:
The NFS gateway machine needs everything required to run an HDFS client, such as the Hadoop JAR files and a HADOOP_CONF directory. The NFS gateway can be on the same host as a DataNode, the NameNode, or any HDFS client.
In core-site.xml of the NameNode, the following must be set (in non-secure mode):
<property>
<name>hadoop.proxyuser.nfs
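The snippet above is cut off. Assuming the gateway runs as the user nfsserver (substitute your actual gateway user), the standard non-secure proxyuser settings look like the following sketch; the wildcard values are permissive and should be tightened in production:

```xml
<property>
  <name>hadoop.proxyuser.nfsserver.groups</name>
  <value>*</value>
  <description>Groups whose users the NFS gateway may impersonate.</description>
</property>
<property>
  <name>hadoop.proxyuser.nfsserver.hosts</name>
  <value>*</value>
  <description>Hosts from which the NFS gateway may impersonate users.</description>
</property>
```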
wget http://downloads.lightbend.com/scala/2.10.6/scala-2.10.6.tgz
tar zxvf scala-2.10.6.tgz
vi /etc/profile
export SCALA_HOME=/home/hadoop/scala-2.10.6
export PATH=$PATH:$SCALA_HOME/bin
source /etc/profile
scala -version
wget http://d3kbcqa49mib13.cloudfront.net/spark-1.6.2-bin-hadoop2.6.tgz
tar zxvf spark-1.6.2-bin-hadoop2.6.tgz
cd spark-1.6.2-bin-hadoop2.6/conf
cp spark-env.sh.template spark-env.sh
vi spark-env.sh
export JAVA_HOME=/opt/jdk1.8.0_91
export SCALA_HOME=/home/hadoop/scala-2.10.6
export HADOOP_HOME=/home/hadoop/hadoop-2.6.4
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export YARN_HOME=/home/hadoop/hadoop-2.6.4
export YARN_CONF_DIR=${YARN_HOME}/etc/hadoop
export SPARK_HOME=/home/hadoop/spark-1.6.2-bin-hadoop2.6
export SPARK_LOCAL_DIRS=/home/hadoop/spark-1.6.2-bin-hadoop2.6
export SPARK_LIBRARY_PATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$HADOOP_HOME/lib/native
expo