2019-06-17 12:09:20    1    0    0

Download Binary

  1. wget https://cdn.mysql.com//Downloads/MySQL-5.7/mysql-5.7.26-el7-x86_64.tar.gz
  2. tar zxvf mysql-5.7.26-el7-x86_64.tar.gz

Init Database

  1. # mysql_install_db is deprecated as of MySQL 5.7.6; prefer mysqld --initialize
  2. mysql_install_db --basedir=/usr/local/mysql --datadir=/data/mysql
  3. mysqld --initialize --basedir=/usr/local/mysql --datadir=/data/mysql --user=pgdata
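
With MySQL 5.7, mysqld --initialize no longer leaves root passwordless: it generates a one-time password and writes it to the error log. A minimal sketch of extracting that password from such a log line (the sample line and password are made up for illustration; against a real installation you would grep the configured error log instead):

```shell
# The initialize step logs a line whose last field is the temporary password.
sample='2019-06-17T12:00:00 [Note] A temporary password is generated for root@localhost: q/r5Xy#z9A'
pw=$(printf '%s\n' "$sample" | awk '{print $NF}')
echo "$pw"
```

In practice: grep 'temporary password' on the error log, then take the last field the same way.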

Configuration

default config file

  1. mysqld --verbose --help | grep -A 1 'Default options'

configure mysql

  1. vi /etc/my.cnf
  2. [mysqld]
  3. basedir=/usr/local/mysql
  4. datadir=/data/mysql
  5. port=20769
  6. user=pgdata
  7. socket=/data/mysql/mysql.sock
  8. # Disabling symbolic-links is recommended to prevent assorted security risks
  9. symbolic-links=0
  10. # Settings user and group are ignored when systemd is used.
  11. # If you need to run mysqld under a different user or group,
  12. # customize your systemd unit file for mariadb according to the
  13. # instructions in http://fedoraproject.org/wiki/Systemd
  14. [mysqld_safe]
  15. log-error=/data/mysql/mariadb.log
  16. pid-file=/data/mysql/mariadb.pid
  17. #
  18. #
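
A quick way to sanity-check what a [mysqld] section actually carries is to parse its key=value pairs (MySQL also ships my_print_defaults, which prints the options the server would read). The sketch below works on a throwaway copy of a few of the settings above rather than on /etc/my.cnf itself:

```shell
# Write a throwaway copy of a few settings, then read one key back.
cnf=$(mktemp)
cat > "$cnf" <<'EOF'
[mysqld]
basedir=/usr/local/mysql
datadir=/data/mysql
port=20769
EOF
port=$(awk -F= '$1=="port"{print $2}' "$cnf")
echo "$port"
rm -f "$cnf"
```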

PREFLIGHT CHECKLIST

The ceph-deploy tool operates out of a directory on an admin node. Any host with network connectivity, a modern Python environment, and SSH (such as a Linux host) should work.

CEPH-DEPLOY SETUP

Register the target machine with subscription-manager, verify your subscriptions, and enable the “Extras” repository for package dependencies. For example:

  1. yum install subscription-manager -y
  2. sudo subscription-manager repos --enable=rhel-7-server-extras-rpms

Install and enable the Extra Packages for Enterprise Linux (EPEL) repository:

  1. sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

Add the Ceph repository to your yum configuration file at /etc/yum.repos.d/ceph.repo with the following command. Replace {ceph-stable-release} with a stable Ceph release (e.g., luminous). For example:
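
The substitution is mechanical. As a sketch, here it is applied to a single baseurl line (the standard download.ceph.com layout) on a throwaway file:

```shell
repo=$(mktemp)
# One line of the repo file, with the release-name placeholder still in place.
echo 'baseurl=https://download.ceph.com/rpm-{ceph-stable-release}/el7/noarch' > "$repo"
sed -i 's/{ceph-stable-release}/luminous/' "$repo"
url=$(cat "$repo")
echo "$url"
rm -f "$repo"
```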

  1. cat << EOM > /etc/yum.repos.d/ceph.repo
  2. [ceph-noarch]
  3. name=Ceph noarch packages
  4. baseurl=https:

Rook is an open source cloud-native storage orchestrator, providing the platform, framework, and support for a diverse set of storage solutions to natively integrate with cloud-native environments.

Rook turns storage software into self-managing, self-scaling, and self-healing storage services. It does this by automating deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management. Rook uses the facilities provided by the underlying cloud-native container management, scheduling and orchestration platform to perform its duties.

This guide will walk you through the basic setup of a Ceph cluster and enable you to consume block, object, and file storage from other pods running in your cluster.

Rook supports Kubernetes v1.8 or higher.
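
One way to verify the minimum version is a semantic-version comparison with sort -V; the cluster version below is a placeholder (in practice, take it from the output of kubectl version):

```shell
have=v1.13.4   # placeholder: your cluster's reported server version
need=v1.8.0    # minimum supported by Rook
# sort -V orders version strings numerically; if the minimum sorts first, we're fine.
lowest=$(printf '%s\n%s\n' "$have" "$need" | sort -V | head -n1)
[ "$lowest" = "$need" ] && status=supported || status=unsupported
echo "$status"
```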

TL;DR (Too Long; Didn't Read)

Deploy Rook Ceph

fetch source

  1. git clone https://github.com/rook/ro

Installation

precondition

disable firewall

  1. systemctl stop firewalld
  2. systemctl disable firewalld

disable swap

  1. swapoff -a
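
swapoff -a only lasts until reboot; to keep swap disabled permanently, also comment out the swap entry in /etc/fstab. The sketch below edits a throwaway copy rather than the real file (the sample entries are illustrative):

```shell
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Comment out any line mentioning swap.
sed -i '/swap/s/^/#/' "$fstab"
n=$(grep -c '^#' "$fstab")
echo "$n"
rm -f "$fstab"
```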

Configure hostnames and ensure each local hostname is reachable

  1. vi /etc/hosts
  2. 139.162.127.39 node1
  3. 139.162.121.213 node2
  4. 139.162.97.24 node3
  5. 139.162.127.39 apiserver.example.com

Install docker

  1. # Install Docker from CentOS/RHEL repository:
  2. yum install -y docker
  3. ----------------------------
  4. # or install Docker CE 18.06 from Docker's CentOS repositories:
  5. ## Install prerequisites.
  6. yum install yum-utils device-mapper-persistent-data lvm2
  7. ## Add docker repository.
  8. yum-config-manager \
  9. --add-repo \
  10. https://download.docker.com/linux/centos/docker-ce.repo
  11. ## Install docker.
  12. yum update && yum install docker-ce-18.06.1.ce
  13. ## Create /etc/docker directory.
  14. mkdir /etc/docker
  15. mkdir /docker
  16. # Setup daemon.
  17. cat > /etc/docker/daemon.json <<EOF
  18. {
  19. "log-driver": "json-file",
  20. "log-opts": {
  21. "max-size": "100m"
  22. },
  23. "storage-opts": [
  24. "over

Build The Flink Docker Image

Dockerfile

  1. FROM java:8-jre-alpine
  2. # Install requirements
  3. RUN apk add --no-cache bash snappy
  4. # Flink environment variables
  5. ENV FLINK_INSTALL_PATH=/opt
  6. ENV FLINK_HOME $FLINK_INSTALL_PATH/flink
  7. ENV FLINK_LIB_DIR $FLINK_HOME/lib
  8. ENV PATH $PATH:$FLINK_HOME/bin
  9. # flink-dist can point to a directory or a tarball on the local system
  10. ARG flink_dist=NOT_SET
  11. # Install build dependencies and flink
  12. ADD $flink_dist $FLINK_INSTALL_PATH
  13. RUN set -x && \
  14. ln -s $FLINK_INSTALL_PATH/flink-* $FLINK_HOME && \
  15. ln -s $FLINK_INSTALL_PATH/job.jar $FLINK_LIB_DIR && \
  16. addgroup -S flink && adduser -D -S -H -G flink -h $FLINK_HOME flink && \
  17. chown -R flink:flink $FLINK_INSTALL_PATH/flink-* && \
  18. chown -h flink:flink $FLINK_HOME
  19. COPY docker-entrypoint.sh /
  20. USER flink
  21. EXPOSE 8081 6123
  22. ENTRYPOINT ["/docker-entrypoint.sh"]
  23. CMD ["help"]
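
The Dockerfile above requires the flink_dist build-arg. A sketch of the build invocation (the tarball name and image tag are assumptions, not values from the original):

```shell
flink_dist=flink-1.7.2-bin-scala_2.11.tgz   # assumed Flink tarball beside the Dockerfile
if command -v docker >/dev/null 2>&1; then
  docker build --build-arg flink_dist="$flink_dist" -t flink-job:latest . \
    || echo "build failed; check that $flink_dist exists beside the Dockerfile"
else
  echo "docker not found; run this on the build host"
fi
```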

  1. FROM centos:centos7.5.1804
  2. # Install requirements
  3. RUN yum install -y snappy wget telnet unzip cu

Requirement

  1. swapoff -a

configure proxy

  1. #install ss client
  2. yum install python-setuptools && easy_install pip
  3. pip install shadowsocks
  4. #configuration
  5. vi /etc/shadowsocks.json
  6. {
  7. "server":"xxx.com",
  8. "server_port":xxxx,
  9. "local_port":1080,
  10. "password":"xxx",
  11. "timeout":600,
  12. "method":"aes-256-cfb"
  13. }
  14. #start ss client
  15. sslocal -c /etc/shadowsocks.json -d start

Install polipo to convert the SOCKS proxy into an HTTP proxy

  1. git clone https://github.com/jech/polipo.git
  2. cd polipo
  3. make all
  4. make install

configure polipo

  1. vi /etc/polipo.cfg
  2. socksParentProxy = "127.0.0.1:1080"
  3. socksProxyType = socks5
  4. logFile = /var/log/polipo
  5. logLevel = 99
  6. logSyslog = true

start polipo

  1. nohup polipo -c /etc/polipo.cfg &
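
Polipo serves HTTP on 127.0.0.1:8123 by default (unless proxyPort is set in polipo.cfg), so proxy-aware tools such as yum, curl, and git can be pointed at it via the standard environment variables:

```shell
# Route HTTP(S) traffic of proxy-aware tools through polipo's default port.
export http_proxy=http://127.0.0.1:8123
export https_proxy=http://127.0.0.1:8123
echo "$http_proxy"
```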

Install docker

  1. # Install Docker from CentOS/RHEL repository:
  2. yum install -y docker
  3. ----------------------------
  4. # or install Docker CE 18.06 from Docker's CentOS repositories:
  5. ## Install prerequisites.
  6. yum install yum-utils device-mapper-persistent-data lvm2
  7. ## Add docker repository

Build openLDAP

You can obtain a copy of the software by following the instructions on the OpenLDAP Software download page (http://www.openldap.org/software/download/). It is recommended that new users start with the latest release.

  1. gunzip -c openldap-VERSION.tgz | tar xvfB -
  2. ./configure --prefix=/home/mingjue/openldap2446
  3. make depend
  4. make
  5. make test
  6. make install
  7. mkdir /home/mingjue/openldap2446/openldap-data

Edit the configuration file.

Use your favorite editor to edit the provided slapd.ldif example (usually installed as /home/mingjue/openldap2446/etc/openldap/slapd.ldif) to contain an MDB database definition of the form:

  1. dn: olcDatabase=mdb,cn=config
  2. objectClass: olcDatabaseConfig
  3. objectClass: olcMdbConfig
  4. olcDatabase: mdb
  5. olcDbMaxSize: 1073741824
  6. olcSuffix: dc=mingjue,dc=com
  7. olcRootDN: cn=Manager,dc=mingjue,dc=com
  8. olcRootPW: secret
  9. olcDbDirectory: /home/mingjue/openldap2446/openldap-data
  10. olcDbIndex: objectClass eq
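
After editing slapd.ldif, the usual next step (per the OpenLDAP quick-start guide) is to import it into the dynamic configuration directory with slapadd and then start slapd. A guarded sketch using the prefix from the configure step above (directory layout assumed to follow the default install):

```shell
prefix=/home/mingjue/openldap2446   # --prefix used at configure time
if [ -x "$prefix/sbin/slapadd" ]; then
  mkdir -p "$prefix/etc/openldap/slapd.d"
  # Import the edited slapd.ldif into the config database (-n 0).
  "$prefix/sbin/slapadd" -n 0 -F "$prefix/etc/openldap/slapd.d" \
      -l "$prefix/etc/openldap/slapd.ldif"
  # Start the server against that config directory.
  "$prefix/libexec/slapd" -F "$prefix/etc/openldap/slapd.d"
else
  echo "slapadd not found under $prefix; finish 'make install' first"
fi
```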

Impor


Environment Preparation

Configure passwordless SSH

Run the following commands on each Flink server to enable passwordless SSH login:

  1. ssh-keygen -t rsa
  2. ssh-copy-id -i .ssh/id_rsa.pub flink-01
  3. ssh-copy-id -i .ssh/id_rsa.pub flink-02
  4. ssh-copy-id -i .ssh/id_rsa.pub flink-03

Configure the Java environment

Download and install JDK 8

  1. wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" https://download.oracle.com/otn-pub/java/jdk/8u191-b12/2787e4a523244c269598db4e85c51e0c/jdk-8u191-linux-i586.tar.gz
  2. tar zxvf jdk-8u191-linux-i586.tar.gz
  3. mv jdk1.8.0_191 /opt

Configure the Java environment variables

  1. echo 'export JAVA_HOME=/opt/jdk1.8.0_191' >> ~/.bashrc
  2. echo 'export PATH=$PATH:$JAVA_HOME/bin' >> ~/.bashrc
  3. source ~/.bashrc

Set up the ZooKeeper cluster

Flink's JobManager relies on ZooKeeper for high availability (HA), so a ZooKeeper cluster must be set up first.

Download and install ZooKeeper

  1. wget http://apache.website-solution.net/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
  2. tar zxvf zookeeper-3.4.10.tar.gz

Edit the ZooKeeper configuration file

  1. vi conf/zoo.cfg
  2. dataDir=/var/zookeeper/
  3. clientPort=2181
  4. server.1=zoo-01:2888:3888
  5. server.2=zoo-0

Host Planning

Host nodes

172.105.197.104 k-master
103.29.68.48 k-node01
172.104.116.143 k-node02
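
The plan above becomes /etc/hosts entries on every node. As a sketch, here the three entries are appended to a throwaway copy rather than the real file:

```shell
hosts=$(mktemp)
# Append one line per planned node.
cat >> "$hosts" <<'EOF'
172.105.197.104 k-master
103.29.68.48    k-node01
172.104.116.143 k-node02
EOF
n=$(wc -l < "$hosts")
echo "$n"
rm -f "$hosts"
```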

Set the hostname

hostnamectl set-hostname k-master

Configure the hosts file

vi /etc/hosts

Master node installation

Disable the firewall

  1. systemctl stop firewalld
  2. systemctl disable firewalld

Install Kubernetes and etcd

  1. yum install etcd kubernetes -y
  2. yum install *rhsm*

Install flannel

  1. yum install flannel -y

Configure the master node

  1. vi /etc/etcd/etcd.conf
  2. :1,$s/localhost/0.0.0.0/g
  3. vi /etc/sysconfig/docker
  4. OPTIONS='--selinux-enabled=false --insecure-registry gcr.io --log-driver=journald --signature-verification=false'
  5. vi /etc/kubernetes/apiserver
  6. # remove ServiceAccount
  7. KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
  8. KUBE_ETCD_SERVERS="--etcd-servers=http://k-master:2379"
  9. KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
  10. KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
  11. KUBE_API_ARGS="--insecure-port=8080 -

Install ignite

To get started with Apache Ignite:

  • Download Apache Ignite as a ZIP archive from https://ignite.apache.org/
  • Unzip the ZIP archive into the installation folder in your system
  • (Optional) Set IGNITE_HOME environment variable to point to the installation folder and make sure there is no trailing / in the path
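
A small sketch of the trailing-slash rule (the install path is an assumption):

```shell
export IGNITE_HOME=/opt/apache-ignite   # assumed install folder; note: no trailing /
# Flag a path that ends in a slash, which Ignite's docs warn against.
case "$IGNITE_HOME" in
  */) status="trailing slash - remove it" ;;
  *)  status=ok ;;
esac
echo "$status"
```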

To start a cluster node with the default configuration, open the command shell and, assuming you are in IGNITE_HOME (Ignite installation folder), just type this:

  1. bin/ignite.sh

By default, ignite.sh starts a node with the default configuration, which is config/default-config.xml.

If you want to use a custom configuration file, then pass it as a parameter to ignite.sh/bat as follows:

  1. bin/ignite.sh examples/config/example-ignite.xml

The path to the configuration file can be absolute, or relative to either IGNITE_HOME (Ignite installation folder) or META-INF folder in your classpath.

Install Ignite Web
