download package from
wget https://nchc.dl.sourceforge.net/project/symmetricds/symmetricds/symmetricds-3.10/symmetric-server-3.10.3.zip
Once SymmetricDS has been installed, we will need to populate the database with the configuration and sym tables. To do this, execute the following steps:
create a node properties file in the engines folder. Each properties file in the engines directory represents a SymmetricDS node.
for pg.properties
#
# Licensed to JumpMind Inc under one or more contributor
# license agreements. See the NOTICE file distributed
# with this work for additional information regarding
# copyright ownership. JumpMind Inc licenses this file
# to you under the GNU General Public License, version 3.0 (GPLv3)
# (the "License"); you may not use this file except in compliance
# with the License.
#
# You should have received a copy of the GNU General Pu
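Below the license header, the engine file describes the node itself. A minimal sketch for a PostgreSQL node (engine name, URLs, ports, group and credentials here are placeholders to adapt):

engine.name=pg-001
db.driver=org.postgresql.Driver
db.url=jdbc:postgresql://localhost:5432/symdb
db.user=symmetric
db.password=symmetric
registration.url=http://localhost:31415/sync/corp-000
group.id=store
external.id=001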
wget https://cdn.mysql.com//Downloads/MySQL-5.7/mysql-5.7.26-el7-x86_64.tar.gz
tar zxvf mysql-5.7.26-el7-x86_64.tar.gz
mysql_install_db --basedir=/usr/local/mysql --datadir=/data/mysql
mysqld --initialize --basedir=/usr/local/mysql --datadir=/data/mysql --user=pgdata
default config file
mysqld --verbose --help |grep -A 1 'Default options'
configure mysql
vi /etc/my.cnf
[mysqld]
basedir=/usr/local/mysql
datadir=/data/mysql
port=20769
user=pgdata
socket=/data/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd
[mysqld_safe]
log-error=/data/mysql/mariadb.log
pid-file=/data/mysql/mariadb.pid
##
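With the data directory initialized and /etc/my.cnf in place, the server can be started and checked; a sketch assuming the socket configured above (mysqld --initialize writes a temporary root password to the error log):

mysqld_safe --defaults-file=/etc/my.cnf &
mysql -uroot -p -S /data/mysql/mysql.sock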
swapoff -a
configure proxy
# install ss client
yum install python-setuptools && easy_install pip
pip install shadowsocks
# configuration
vi /etc/shadowsocks.json
{
  "server": "xxx.com",
  "server_port": xxxx,
  "local_port": 1080,
  "password": "xxx",
  "timeout": 600,
  "method": "aes-256-cfb"
}
# start ss client
sslocal -c /etc/shadowsocks.json -d start
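Once sslocal is running, the local SOCKS proxy can be checked directly, for example:

curl --socks5 127.0.0.1:1080 -I https://www.google.com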
install polipo to convert the SOCKS proxy into an HTTP proxy
git clone https://github.com/jech/polipo.git
cd polipo
make all
make install
configure polipo
vi /etc/polipo.cfg
socksParentProxy = "127.0.0.1:1080"
socksProxyType = socks5
logFile = /var/log/polipo
logLevel = 99
logSyslog = true
start polipo
nohup polipo -c /etc/polipo.cfg &
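Polipo listens on port 8123 unless proxyPort is changed, so tools that only speak HTTP can be pointed at it, for example:

export http_proxy=http://127.0.0.1:8123
export https_proxy=http://127.0.0.1:8123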
# Install Docker from CentOS/RHEL repository:
yum install -y docker
----------------------------
# or install Docker CE 18.06 from Docker's CentOS repositories:
## Install prerequisites.
yum install yum-utils device-mapper-persistent-data lvm2
## Add docker repository
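The truncated Docker CE branch above typically continues along these lines (a sketch; the exact package version string is an example and can be checked with yum list docker-ce --showduplicates):

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce-18.06.1.ce
systemctl enable docker && systemctl start docker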
Rook is an open source cloud-native storage orchestrator, providing the platform, framework, and support for a diverse set of storage solutions to natively integrate with cloud-native environments.
Rook turns storage software into self-managing, self-scaling, and self-healing storage services. It does this by automating deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management. Rook uses the facilities provided by the underlying cloud-native container management, scheduling and orchestration platform to perform its duties.
This guide will walk you through the basic setup of a Ceph cluster and enable you to consume block, object, and file storage from other pods running in your cluster.
Kubernetes v1.8 or higher is supported by Rook.
TL;DR (Too Long; Didn't Read)
git clone https://github.com/rook/rook.git
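From the cloned repository, a basic Ceph cluster is usually created from the bundled example manifests; a sketch for the older Rook releases this guide targets (paths differ between releases):

cd rook/cluster/examples/kubernetes/ceph
kubectl create -f operator.yaml
kubectl create -f cluster.yaml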
Run the following commands on each Flink server to set up passwordless SSH login:
ssh-keygen -t rsa
ssh-copy-id -i .ssh/id_rsa.pub flink-01
ssh-copy-id -i .ssh/id_rsa.pub flink-02
ssh-copy-id -i .ssh/id_rsa.pub flink-03
Download and install JDK 8
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" https://download.oracle.com/otn-pub/java/jdk/8u191-b12/2787e4a523244c269598db4e85c51e0c/jdk-8u191-linux-i586.tar.gz
tar zxvf jdk-8u191-linux-i586.tar.gz
mv jdk1.8.0_191 /opt
Configure the Java environment variables
echo 'export JAVA_HOME=/opt/jdk1.8.0_191' >> ~/.bashrc
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> ~/.bashrc
source ~/.bashrc
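Verify the JDK is on the PATH:

java -version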
Flink's JobManager relies on ZooKeeper for HA, so a ZooKeeper cluster needs to be set up first.
Download and install ZooKeeper
wget http://apache.website-solution.net/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
tar zxvf zookeeper-3.4.10.tar.gz
Edit the ZooKeeper configuration file
vi conf/zoo.cfg
dataDir=/var/zookeeper/
clientPort=2181
server.1=zoo-01:2888:3888
server.2=zoo-0
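Each server in the ensemble also needs a myid file under dataDir matching its server.N entry, after which ZooKeeper can be started on every node (a sketch assuming the dataDir above):

echo 1 > /var/zookeeper/myid
bin/zkServer.sh start
bin/zkServer.sh status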
FROM java:8-jre-alpine
# Install requirements
RUN apk add --no-cache bash snappy
# Flink environment variables
ENV FLINK_INSTALL_PATH=/opt
ENV FLINK_HOME $FLINK_INSTALL_PATH/flink
ENV FLINK_LIB_DIR $FLINK_HOME/lib
ENV PATH $PATH:$FLINK_HOME/bin
# flink-dist can point to a directory or a tarball on the local system
ARG flink_dist=NOT_SET
# Install build dependencies and flink
ADD $flink_dist $FLINK_INSTALL_PATH
RUN set -x && \
  ln -s $FLINK_INSTALL_PATH/flink-* $FLINK_HOME && \
  ln -s $FLINK_INSTALL_PATH/job.jar $FLINK_LIB_DIR && \
  addgroup -S flink && adduser -D -S -H -G flink -h $FLINK_HOME flink && \
  chown -R flink:flink $FLINK_INSTALL_PATH/flink-* && \
  chown -h flink:flink $FLINK_HOME
COPY docker-entrypoint.sh /
USER flink
EXPOSE 8081 6123
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["help"]
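The image is then built by pointing flink_dist at a local Flink distribution tarball; a sketch (the Flink version is a placeholder, and note the Dockerfile above also expects a job.jar to have been added next to the distribution):

docker build --build-arg flink_dist=flink-1.7.2-bin-scala_2.11.tgz -t flink-job .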
FROM centos:centos7.5.1804
# Install requirements
RUN yum install -y snappy wget telnet unzip cu
You can obtain a copy of the software by following the instructions on the OpenLDAP Software download page (http://www.openldap.org/software/download/). It is recommended that new users start with the latest release.
gunzip -c openldap-VERSION.tgz | tar xvfB -
cd openldap-VERSION
./configure --prefix=/home/mingjue/openldap2446
make depend
make
make install
make test
mkdir /home/mingjue/openldap2446/openldap-data
Use your favorite editor to edit the provided slapd.ldif example (usually installed as /home/mingjue/openldap2446/etc/openldap/slapd.ldif) to contain an MDB database definition of the form:
dn: olcDatabase=mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: mdb
olcDbMaxSize: 1073741824
olcSuffix: dc=mingjue,dc=com
olcRootDN: cn=Manager,dc=mingjue,dc=com
olcRootPW: secret
olcDbDirectory: /home/mingjue/openldap2446/openldap-data
olcDbIndex: objectClass eq
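Following the usual quick-start flow, the configuration is then imported and slapd started (a sketch, adjusted to the install prefix used above):

mkdir /home/mingjue/openldap2446/etc/slapd.d
/home/mingjue/openldap2446/sbin/slapadd -n 0 -F /home/mingjue/openldap2446/etc/slapd.d -l /home/mingjue/openldap2446/etc/openldap/slapd.ldif
/home/mingjue/openldap2446/libexec/slapd -F /home/mingjue/openldap2446/etc/slapd.d
ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts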
172.105.197.104 k-master
103.29.68.48 k-node01
172.104.116.143 k-node02
hostnamectl set-hostname k-master
vi /etc/hosts
systemctl stop firewalld
systemctl disable firewalld
yum install etcd kubernetes -y
yum install *rhsm*
yum install flannel -y
vi /etc/etcd/etcd.conf
:1,$s/localhost/0.0.0.0/g
vi /etc/sysconfig/docker
OPTIONS='--selinux-enabled=false --insecure-registry gcr.io --log-driver=journald --signature-verification=false'
vi /etc/kubernetes/apiserver
# remove ServiceAccount from the admission control list
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://k-master:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS="--insecure-port=8080 -
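On the master, the services installed by the kubernetes yum package are then enabled and started (a sketch; worker nodes run kubelet, kube-proxy, flanneld and docker instead):

systemctl enable etcd kube-apiserver kube-controller-manager kube-scheduler
systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler
kubectl get nodes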
wget http://nginx.org/download/nginx-1.11.2.tar.gz
tar zxvf nginx-1.11.2.tar.gz
wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.37.tar.bz2
tar jxvf pcre-8.37.tar.bz2
wget http://zlib.net/zlib-1.2.8.tar.gz
tar zxvf zlib-1.2.8.tar.gz
wget https://codeload.github.com/intaro/nginx-image-filter-watermark/zip/master
unzip master
cd nginx-image-filter-watermark-master/
cp ngx_http_image_filter_module.c ../nginx-1.11.2/src/http/modules/
cd ../nginx-1.11.2
sudo yum install openssl openssl-devel gd gd-devel
./configure --prefix=/root/programs/nginx --with-http_ssl_module --with-pcre=../pcre-8.37 --with-zlib=../zlib-1.2.8 --with-http_image_filter_module
make
make install
user root;
worker_processes 20;
http {
include mime.types;
default_type application/octet-stream;
client_max_body_size 10m;
server {
listen 10000;
server_name www.xx
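The server block above is cut off; a typical location using the image filter module looks something like the following sketch (these are the standard ngx_http_image_filter_module directives; the watermark-specific directives added by the intaro fork are documented in its README):

location /images/ {
    root /data;
    image_filter resize 750 -;
    image_filter_jpeg_quality 85;
    image_filter_buffer 10M;
}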
To get started with Apache Ignite:
To start a cluster node with the default configuration, open the command shell and, assuming you are in IGNITE_HOME (Ignite installation folder), just type this:
$ bin/ignite.sh
By default, ignite.sh starts a node with the default configuration file, config/default-config.xml.
If you want to use a custom configuration file, then pass it as a parameter to ignite.sh/bat as follows:
bin/ignite.sh examples/config/example-ignite.xml
The path to the configuration file can be absolute, or relative to either IGNITE_HOME (Ignite installation folder) or META-INF folder in your classpath.
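A custom configuration file is just a Spring XML bean definition of IgniteConfiguration; a minimal sketch:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration"/>
</beans>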