Rook is an open source cloud-native storage orchestrator, providing the platform, framework, and support for a diverse set of storage solutions to natively integrate with cloud-native environments.
Rook turns storage software into self-managing, self-scaling, and self-healing storage services. It does this by automating deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management. Rook uses the facilities provided by the underlying cloud-native container management, scheduling and orchestration platform to perform its duties.
This guide will walk you through the basic setup of a Ceph cluster and enable you to consume block, object, and file storage from other pods running in your cluster.
Rook supports Kubernetes v1.8 or higher.
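To illustrate the "consume block storage" case, a PersistentVolumeClaim against a Rook-provisioned StorageClass might look like the sketch below. The StorageClass name rook-ceph-block is an assumption here; it depends on the storageclass.yaml you apply from the Rook example manifests for your Rook version.

```yaml
# Hypothetical PVC consuming Rook block storage; the storageClassName
# must match the StorageClass created from the Rook examples.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Pods then mount the claim by name like any other Kubernetes volume.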
TL;DR (Too Long; Didn't Read)
git clone https://github.com/rook/rook
disable firewall
systemctl stop firewalld
systemctl disable firewalld
disable swap
swapoff -a
configure the hostname on each node and make sure every node's hostname is resolvable
vi /etc/hosts
139.162.127.39 node1
139.162.121.213 node2
139.162.97.24 node3
139.162.127.39 apiserver.example.com
# Install Docker from CentOS/RHEL repository:
yum install -y docker
----------------------------
# or install Docker CE 18.06 from Docker's CentOS repositories:
## Install prerequisites.
yum install yum-utils device-mapper-persistent-data lvm2
## Add docker repository.
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
## Install docker.
yum update && yum install docker-ce-18.06.1.ce
## Create /etc/docker directory.
mkdir /etc/docker
# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-opts": [
"overlay2.override_kernel_check=true"
],
"bip": "172.17.0.1/24"
}
EOF
# Alternative bridge IP if 172.17.0.1/24 conflicts: "bip": "192.168.35.1/24"
# When changing bip on a host where docker0 already exists, delete the old bridge first: ip link del docker0
mkdir -p /etc/systemd/system/docker.service.d
# Restart docker.
systemctl daemon-reload
systemctl restart docker
Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. Make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config.
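On RHEL/CentOS 7, the kubeadm install docs recommend ensuring bridged traffic passes through iptables. A sysctl drop-in of the following form, applied with sysctl --system, does this:

```
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```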
Run the following commands on each of the Flink servers to set up passwordless SSH login:
ssh-keygen -t rsa
ssh-copy-id -i .ssh/id_rsa.pub flink-01
ssh-copy-id -i .ssh/id_rsa.pub flink-02
ssh-copy-id -i .ssh/id_rsa.pub flink-03
Download and install JDK 8:
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" https://download.oracle.com/otn-pub/java/jdk/8u191-b12/2787e4a523244c269598db4e85c51e0c/jdk-8u191-linux-i586.tar.gz
tar zxvf jdk-8u191-linux-i586.tar.gz
mv jdk1.8.0_191 /opt
Configure the Java environment variables (the directory must match the JDK version unpacked above):
echo 'export JAVA_HOME=/opt/jdk1.8.0_191' >> ~/.bashrc
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> ~/.bashrc
source ~/.bashrc
Flink's JobManager relies on ZooKeeper for HA, so a ZooKeeper cluster has to be set up first.
Download and install ZooKeeper:
wget http://apache.website-solution.net/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
tar zxvf zookeeper-3.4.10.tar.gz
Edit the ZooKeeper configuration file:
vi conf/zoo.cfg
dataDir=/var/zookeeper/
clientPort=2181
server.1=zoo-01:2888:3888
server.2=zoo-02:2888:3888
server.3=zoo-03:2888:3888
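Each server in the ensemble also needs a myid file in dataDir whose contents match its server.N id (zoo-01 gets 1, and so on). Sketched here against a scratch directory rather than the real /var/zookeeper/:

```shell
# Create one myid file per server; on the real nodes this is a single
# /var/zookeeper/myid on each host, containing only that host's id.
for id in 1 2 3; do
  mkdir -p zk-demo/zoo-0$id
  echo $id > zk-demo/zoo-0$id/myid
done
cat zk-demo/zoo-01/myid   # -> 1
```

Without the myid file, the server refuses to join the quorum.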
FROM java:8-jre-alpine
# Install requirements
RUN apk add --no-cache bash snappy
# Flink environment variables
ENV FLINK_INSTALL_PATH=/opt
ENV FLINK_HOME $FLINK_INSTALL_PATH/flink
ENV FLINK_LIB_DIR $FLINK_HOME/lib
ENV PATH $PATH:$FLINK_HOME/bin
# flink-dist can point to a directory or a tarball on the local system
ARG flink_dist=NOT_SET
# Install build dependencies and flink
ADD $flink_dist $FLINK_INSTALL_PATH
RUN set -x && \
ln -s $FLINK_INSTALL_PATH/flink-* $FLINK_HOME && \
ln -s $FLINK_INSTALL_PATH/job.jar $FLINK_LIB_DIR && \
addgroup -S flink && adduser -D -S -H -G flink -h $FLINK_HOME flink && \
chown -R flink:flink $FLINK_INSTALL_PATH/flink-* && \
chown -h flink:flink $FLINK_HOME
COPY docker-entrypoint.sh /
USER flink
EXPOSE 8081 6123
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["help"]
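The Dockerfile above copies a docker-entrypoint.sh that is not shown. A minimal sketch of what such a script could look like (hypothetical; it dispatches on the first argument, matching the CMD ["help"] default):

```shell
# Write a hypothetical entrypoint that dispatches on its first argument.
cat > docker-entrypoint.sh <<'EOF'
#!/bin/sh
case "$1" in
  jobmanager)  exec "$FLINK_HOME"/bin/jobmanager.sh start-foreground ;;
  taskmanager) exec "$FLINK_HOME"/bin/taskmanager.sh start-foreground ;;
  *)           echo "Usage: docker-entrypoint.sh (jobmanager|taskmanager|help)" ;;
esac
EOF
chmod +x docker-entrypoint.sh
./docker-entrypoint.sh help
```

Running the container with jobmanager or taskmanager as the command would then start the corresponding Flink daemon in the foreground.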
FROM centos:centos7.5.1804
# Install requirements
RUN yum install -y snappy wget telnet unzip cu
You can obtain a copy of the software by following the instructions on the OpenLDAP Software download page (http://www.openldap.org/software/download/). It is recommended that new users start with the latest release.
gunzip -c openldap-VERSION.tgz | tar xvfB -
./configure --prefix=/home/mingjue/openldap2446
make depend
make
make test
make install
mkdir /home/mingjue/openldap2446/openldap-data
Use your favorite editor to edit the provided slapd.ldif example (usually installed as /home/mingjue/openldap2446/etc/openldap/slapd.ldif) to contain a MDB database definition of the form:
dn: olcDatabase=mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: mdb
olcDbMaxSize: 1073741824
olcSuffix: dc=mingjue,dc=com
olcRootDN: cn=Manager,dc=mingjue,dc=com
olcRootPW: secret
olcDbDirectory: /home/mingjue/openldap2446/openldap-data
olcDbIndex: objectClass eq
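With slapd configured and started, the quickstart proceeds by loading a base entry for the suffix. A minimal LDIF for the olcSuffix above might look like this (the organization name is a placeholder):

```
dn: dc=mingjue,dc=com
objectClass: dcObject
objectClass: organization
o: Mingjue Example Organization
dc: mingjue
```

It would be loaded with ldapadd -x -D "cn=Manager,dc=mingjue,dc=com" -W -f base.ldif, entering the olcRootPW value when prompted.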
hostnamectl set-hostname k-master
vi /etc/hosts
172.105.197.104 k-master
103.29.68.48 k-node01
172.104.116.143 k-node02
systemctl stop firewalld
systemctl disable firewalld
yum install etcd kubernetes -y
yum install *rhsm*
yum install flannel -y
vi /etc/etcd/etcd.conf
:1,$s/localhost/0.0.0.0/g
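The vi substitution above can also be done non-interactively; a sketch with sed against a sample line (the real target is /etc/etcd/etcd.conf):

```shell
# Demonstrate the substitution on a scratch copy rather than the live config.
printf 'ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"\n' > etcd.conf.sample
sed -i.bak 's/localhost/0.0.0.0/g' etcd.conf.sample
cat etcd.conf.sample   # -> ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
```

The -i.bak flag keeps a backup of the original file, which is worth having before editing a live etcd config.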
vi /etc/sysconfig/docker
OPTIONS='--selinux-enabled=false --insecure-registry gcr.io --log-driver=journald --signature-verification=false'
vi /etc/kubernetes/apiserver
# remove ServiceAccount from the admission-control list
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://k-master:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS="--insecure-port=8080 -
wget http://nginx.org/download/nginx-1.11.2.tar.gz
tar zxvf nginx-1.11.2.tar.gz
wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.37.tar.bz2
tar jxvf pcre-8.37.tar.bz2
wget http://zlib.net/zlib-1.2.8.tar.gz
tar zxvf zlib-1.2.8.tar.gz
wget https://codeload.github.com/intaro/nginx-image-filter-watermark/zip/master
unzip master
cd nginx-image-filter-watermark-master/
cp ngx_http_image_filter_module.c ../nginx-1.11.2/src/http/modules/
cd ../nginx-1.11.2
sudo yum install openssl openssl-devel gd gd-devel
./configure --prefix=/root/programs/nginx --with-http_ssl_module --with-pcre=../pcre-8.37 --with-zlib=../zlib-1.2.8 --with-http_image_filter_module
make
make install
user root;
worker_processes 20;
http {
    include mime.types;
    default_type application/octet-stream;
    client_max_body_size 10m;
    server {
        listen 10000;
        server_name www.xx
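The build above enables http_image_filter_module; a location block using its standard directives might look like the following inside the server block (sizes are illustrative, and the watermark patch adds its own directives not shown here):

```nginx
location /img/ {
    # Resize images on the fly with the stock image_filter directives.
    image_filter resize 150 100;
    image_filter_jpeg_quality 85;
    image_filter_buffer 10M;
}
```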
yum install https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-centos10-10-1.noarch.rpm
yum install postgresql10
yum install postgresql10-server
/usr/pgsql-10/bin/postgresql-10-setup initdb
systemctl enable postgresql-10
systemctl start postgresql-10
su - postgres
psql
\password postgres
xxxpgxxx
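To allow password logins from other hosts (a common follow-up step not shown above), postgresql.conf and pg_hba.conf would need entries along these lines; the 0.0.0.0/0 range is illustrative and should be narrowed in practice:

```
# postgresql.conf
listen_addresses = '*'

# pg_hba.conf
host    all    all    0.0.0.0/0    md5
```

Both files live under the data directory created by initdb, and the server must be restarted afterwards.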
wget https://ftp.postgresql.org/pub/source/v10.0/postgresql-10.0.tar.gz
tar zxvf postgresql-10.0.tar.gz
cd postgresql-10.0
export PYTHON=/usr/bin/python3
./configure --prefix=/usr/local/postgres10 --with-python --with-openssl --with-libxml --with-ldap --with-libxslt --enable-thread-safety
make
make install
mkdir /home/pgdata/pg10/pgdata
./initdb -D /home/pgdata/pg10/pgdata
psql
\password
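For the source build, the new binaries are not on PATH by default. A sketch of the environment setup, with paths taken from the configure and initdb steps above:

```shell
# Point the shell at the source-built installation.
export PATH=/usr/local/postgres10/bin:$PATH
export PGDATA=/home/pgdata/pg10/pgdata
# The server would then be started with: pg_ctl -D "$PGDATA" -l logfile start
echo "$PGDATA"
```

Putting the two export lines in ~/.bashrc makes them survive new shells, as was done for JAVA_HOME earlier.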
To get started with Apache Ignite:
To start a cluster node with the default configuration, open the command shell and, assuming you are in IGNITE_HOME (Ignite installation folder), just type this:
$ bin/ignite.sh
By default, ignite.sh starts a node with the default configuration file, config/default-config.xml.
If you want to use a custom configuration file, then pass it as a parameter to ignite.sh/bat as follows:
bin/ignite.sh examples/config/example-ignite.xml
The path to the configuration file can be absolute, or relative to either IGNITE_HOME (Ignite installation folder) or META-INF folder in your classpath.