diff --git a/README.md b/README.md index 1a976ed..b121c1d 100644 --- a/README.md +++ b/README.md @@ -21,7 +21,6 @@ Principles/goals of this environment: * Sample databases * Misc: local repos for conference VMs, - ## Walkthrough This section should get you up and running. @@ -95,7 +94,6 @@ instance_name_prefix: SOME_NAME_PREFIX default_vpc_subnet_id: subnet-896602d0 ``` - #### Multi-region AWS Multi-region can be supported by adding a 'regions' hash to the .aws_secrets file: @@ -122,24 +120,29 @@ regions: Note that the default 'keypair_name' and 'keypair_path' can still be used. Region will default to 'us-east-1' unless you specifically override it. -#### Boxes and multi-region +#### Boxes and Multiple AWS Regions + +AMIs are region-specific. The AWS Vagrant boxes you use must include an AMI for each region in which you wish to deploy. -Note that the aws Vagrant boxes you use must include AMI's in each region. For example, see the regions listed here: https://vagrantcloud.com/grypyrg/centos-x86_64. Packer, which is used to build this box, can be configured to add more regions if desired, but it requires building a new box. +For an example, see the regions listed here: https://vagrantcloud.com/grypyrg/centos-x86_64. -#### VPC integration +Packer, which is used to build this box, can be configured to add more regions if desired, but it requires building a new box. -The latest versions of my grypyrg/centos-x86-64 boxes require VPC. Currently this software supports passing a vpc_subnet_id per instance in one of two ways: +The latest versions of the grypyrg/centos-x86_64 boxes require a VPC, since AWS now requires a VPC for all instances. -1. Set the default_vpc_subnet_id in the ~/.aws_secrets file. This can either be global or per-region. -1. Pass a subnet_id into the provider_aws method in the vagrant-common.rb file. +#### AWS VPC Integration +As shown in the example above, you must set the `default_vpc_subnet_id` in the ~/.aws_secrets file. You can override this on a per-region basis. + +You can also pass a `subnet_id` into the `provider_aws` method using an override in your Vagrantfile (see the sketch below). ### Clone this repo ```bash git clone cd vagrant-percona -git submodule init; git submodule update +git submodule init +git submodule update --recursive ``` ### Launch the box @@ -156,42 +159,70 @@ vagrant ssh When you create a lot of vagrant environments with vagrant-percona, creating/renaming those Vagrantfile files can get quite messy easily. -The repository contains a small script that allows you to create a new environment, which will build a new directory with the proper Vagrantfile files and links to the puppet code. If you're setting up a PXC environment, symlinks will also be provided to the necessary pxc-bootstrap.sh script. +The repository contains a small script that allows you to create a new environment, which will build a new directory with the proper Vagrantfile files and links to the puppet code. This allows you to have many Vagrant environments configured simultaneously. 
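To illustrate the `subnet_id` override mentioned in the VPC notes above, here is a minimal sketch. It reuses the sample subnet ID from the `.aws_secrets` example and the `provider_aws` block style used throughout this repo; the `subnet_id` attribute is part of the vagrant-aws provider config.

```ruby
# Hypothetical override inside a Vagrantfile; `name` and `node_config`
# come from the enclosing config.vm.define block, as in the bundled
# Vagrantfiles.
provider_aws( name, node_config, 'm3.medium' ) { |aws, override|
  aws.subnet_id = "subnet-896602d0"  # pin this instance to a specific VPC subnet
}
```

Back to the environment script, an example session: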
```bash -vagrant-percona$ ./create-new-env.sh single_node ~/vagrant/percona-toolkit-ptosc-plugin-ptheartbeat +vagrant-percona$ ./create-new-env.sh single_node ~/vagrant/testing-issue-428 Creating 'single_node' Environment -percona-toolkit-ptosc-plugin-ptheartbeat gryp$ vagrant up --provider=aws -percona-toolkit-ptosc-plugin-ptheartbeat gryp$ vagrant ssh -``` - -## Cleanup - -### Shutdown the vagrant instance(s) - -``` -vagrant destroy +vagrant-percona$ cd ~/vagrant/testing-issue-428 +~/vagrant/testing-issue-428$ vagrant up --provider=aws +~/vagrant/testing-issue-428$ vagrant ssh ``` ## Master/Slave +This Vagrantfile will launch 2 (or more; edit the file and uncomment the proper build line) MySQL servers in either VirtualBox or AWS. Running the ms-setup.pl script will configure the first instance as the master and all remaining nodes as asynchronous slaves. + ```bash ln -sf Vagrantfile.ms.rb Vagrantfile -vagrant up +vagrant up --provider [aws|virtualbox] ./ms-setup.pl ``` ## PXC +This Vagrantfile will launch 3 Percona 5.7 XtraDB Cluster nodes in either VirtualBox or AWS. The InnoDB Buffer Pool is set to 128MB. The first node is automatically bootstrapped to form the cluster; the remaining 2 nodes then join it. + +Each VirtualBox instance is launched with 1GB of memory. + +Each EC2 instance will use the `m3.medium` instance type, which has 3.75GB of RAM. + ```bash ln -sf Vagrantfile.pxc.rb Vagrantfile vagrant up -./pxc-bootstrap.sh ``` +__NOTE:__ Because Vagrant can build AWS instances in parallel, there is no guarantee "node 1" will bootstrap before the other 2. If this happens, node 2 and node 3 will be unable to join the cluster. It is therefore recommended that you launch node 1 manually first, then launch the remaining nodes. _(This is not an issue with VirtualBox, as parallel builds are not supported.)_ + +Example: + +```bash +vagrant up node1 && sleep 5 && vagrant up +``` + +## PXC (Big) + +This Vagrantfile will launch 3 Percona 5.7 XtraDB Cluster nodes in either VirtualBox or AWS. The InnoDB Buffer Pool is set to _12GB_. + +__WARNING:__ This requires a virtual machine with 15GB of RAM. Most consumer laptops and desktops do not have enough RAM to run multiple nodes of this configuration. + +Each EC2 instance will use the `m3.xlarge` instance type, which has 15GB of RAM. + +```bash +ln -sf Vagrantfile.pxc-big.rb Vagrantfile +vagrant up +``` + +__NOTE:__ Because Vagrant can build AWS instances in parallel, there is no guarantee "node 1" will bootstrap before the other 2. If this happens, node 2 and node 3 will be unable to join the cluster. It is therefore recommended that you launch node 1 manually first, then launch the remaining nodes. _(This is not an issue with VirtualBox, as parallel builds are not supported.)_ + +Example: + +```bash +vagrant up node1 && sleep 5 && vagrant up +``` ## Using this repo to create benchmarks @@ -210,7 +241,10 @@ vagrant up ... ``` -# Future Stuff +## Cleanup +### Shutdown the vagrant instance(s) -* Virtualbox support +``` +vagrant destroy +``` diff --git a/Vagrantfile.ms.rb b/Vagrantfile.ms.rb index ff3a2d0..36d5083 100644 --- a/Vagrantfile.ms.rb +++ b/Vagrantfile.ms.rb @@ -1,14 +1,19 @@ # -*- mode: ruby -*- # vi: set ft=ruby : +# To create multiple slaves, read the instructions near the end +# of this file. 
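+# (A sketch of what that means: each node is a single call to
+#   build_box( config, name, ip, server_id )
+# so an extra slave is just one more line with a unique name, IP, and
+# server_id, like the commented slave2 example at the bottom.)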
+ require File.dirname(__FILE__) + '/lib/vagrant-common.rb' def build_box( config, name, ip, server_id ) - mysql_version = "56" - config.vm.define name do |node_config| - node_config.vm.hostname = name - node_config.vm.network :private_network, ip: ip - + mysql_version = "57" + + config.vm.define name do |node_config| + node_config.vm.hostname = name + node_config.vm.network :private_network, ip: ip, adapter: 1, auto_config: false + node_config.vm.provision :hostmanager + # Provisioners provision_puppet( node_config, "base.pp" ) provision_puppet( node_config, "percona_server.pp" ) { |puppet| @@ -17,7 +22,7 @@ def build_box( config, name, ip, server_id ) "percona_server_version" => mysql_version, "innodb_buffer_pool_size" => "128M", "innodb_log_file_size" => "64M", - "server_id" => server_id + "server_id" => server_id } } provision_puppet( node_config, "percona_client.pp" ) { |puppet| @@ -28,26 +33,30 @@ def build_box( config, name, ip, server_id ) # Providers provider_virtualbox( nil, node_config, 256 ) { |vb, override| + vb.linked_clone = true provision_puppet( override, "percona_server.pp" ) {|puppet| - puppet.facter = {"datadir_dev" => "dm-2"} + puppet.facter = { + "default_interface" => "eth1", + "datadir_dev" => "dm-2" + } } } - - provider_aws( name, node_config, 'm3.medium') { |aws, override| + + provider_aws( name, node_config, 'm3.medium') { |aws, override| aws.block_device_mapping = [ - { - 'DeviceName' => "/dev/sdl", - 'VirtualName' => "mysql_data", - 'Ebs.VolumeSize' => 20, - 'Ebs.DeleteOnTermination' => true, - } + { + 'DeviceName' => "/dev/sdl", + 'VirtualName' => "mysql_data", + 'Ebs.VolumeSize' => 20, + 'Ebs.DeleteOnTermination' => true, + } ] provision_puppet( override, "percona_server.pp" ) {|puppet| puppet.facter = {"datadir_dev" => "xvdl"} } - } + } end - + if block_given? yield end @@ -56,10 +65,14 @@ Vagrant.configure("2") do |config| - config.vm.box = "grypyrg/centos-x86_64" - config.ssh.username = "vagrant" - - build_box( config, 'master', '192.168.70.2', '1' ) - build_box( config, 'slave', '192.168.70.3', '2' ) -end + config.vm.box = "grypyrg/centos-x86_64" + config.ssh.username = "vagrant" + + build_box( config, 'master', '192.168.70.2', '1' ) + build_box( config, 'slave1', '192.168.70.3', '2' ) + # Uncomment the line below to build a 3rd slave. You can add more + # lines like this one to have more nodes. Be sure to adjust the + # parameters to prevent duplicates. + #build_box( config, 'slave2', '192.168.70.4', '3' ) +end diff --git a/Vagrantfile.pxc-big.rb b/Vagrantfile.pxc-big.rb index a6705b6..3601e04 100644 --- a/Vagrantfile.pxc-big.rb +++ b/Vagrantfile.pxc-big.rb @@ -15,8 +15,8 @@ # AWS configuration aws_region = "us-west-1" -aws_ips='private' # Use 'public' for cross-region AWS. 'private' otherwise (or commented out) -pxc_security_groups = ['default','pxc'] +aws_ips='private' # Use 'public' for cross-region AWS. 
'private' otherwise (or commented out) +pxc_security_groups = ['sg-b4438ad3'] cluster_address = 'gcomm://' + Array.new( pxc_nodes ){ |i| pxc_node_name_prefix + (i+1).to_s }.join(',') @@ -24,79 +24,70 @@ config.vm.box = "grypyrg/centos-x86_64" config.ssh.username = "vagrant" - # Create the PXC nodes - (1..pxc_nodes).each do |i| - name = pxc_node_name_prefix + i.to_s - config.vm.define name do |node_config| - node_config.vm.hostname = name - node_config.vm.network :private_network, type: "dhcp" - node_config.vm.provision :hostmanager - - # Provisioners - provision_puppet( node_config, "pxc_server.pp" ) { |puppet| - puppet.facter = { - # PXC setup - "percona_server_version" => pxc_version, - 'innodb_buffer_pool_size' => '12G', - 'innodb_log_file_size' => '1G', - 'innodb_flush_log_at_trx_commit' => '0', - 'pxc_bootstrap_node' => (i == 1 ? true : false ), - 'wsrep_cluster_address' => cluster_address, - 'wsrep_provider_options' => 'gcache.size=128M; gcs.fc_limit=1024; evs.user_send_window=512; evs.send_window=512', - 'wsrep_slave_threads' => 8, - - # Sysbench setup - 'sysbench_load' => (i == 1 ? true : false ), - 'tables' => 20, - 'rows' => 1000000, - 'threads' => 8, - # 'tx_rate' => 10, - - # PCT setup - 'percona_agent_api_key' => ENV['PERCONA_AGENT_API_KEY'] - } - } + # Create the PXC nodes + (1..pxc_nodes).each do |i| + name = pxc_node_name_prefix + i.to_s + config.vm.define name do |node_config| + node_config.vm.hostname = name + node_config.vm.network :private_network, type: "dhcp" + node_config.vm.provision :hostmanager - # Providers - provider_virtualbox( nil, node_config, 256 ) { |vb, override| - provision_puppet( override, "pxc_server.pp" ) {|puppet| - puppet.facter = { - 'default_interface' => 'eth1', - - # PXC Setup - 'datadir_dev' => 'dm-2', - } - } - } - provider_vmware( name, node_config, 256 ) { |vb, override| - provision_puppet( override, "pxc_server.pp" ) {|puppet| - puppet.facter = { - 'default_interface' => 'eth1', - - # PXC Setup - 'datadir_dev' => 'dm-2', - } - } - } - - provider_aws( "PXC #{name}", node_config, 'm3.xlarge', aws_region, pxc_security_groups, aws_ips) { |aws, override| - aws.block_device_mapping = [ - { 'DeviceName' => "/dev/sdb", 'VirtualName' => "ephemeral0" }, - { 'DeviceName' => "/dev/sdc", 'VirtualName' => "ephemeral1" } - ] - provision_puppet( override, "pxc_server.pp" ) {|puppet| puppet.facter = { - 'softraid' => true, - 'softraid_dev' => '/dev/md0', - 'softraid_level' => 'stripe', - 'softraid_devices' => '2', - 'softraid_dev_str' => '/dev/xvdb /dev/xvdc', + # Provisioners + provision_puppet( node_config, "pxc_server.pp" ) { |puppet| + puppet.facter = { + # PXC setup + "percona_server_version" => pxc_version, + 'innodb_buffer_pool_size' => '12G', + 'innodb_log_file_size' => '1G', + 'innodb_flush_log_at_trx_commit' => '0', + 'pxc_bootstrap_node' => (i == 1 ? true : false ), + 'wsrep_cluster_address' => cluster_address, + 'wsrep_provider_options' => 'gcache.size=128M; gcs.fc_limit=1024; evs.user_send_window=512; evs.send_window=512', + 'wsrep_slave_threads' => 8, - 'datadir_dev' => 'md0' - }} - } + # Sysbench setup on node 1 + 'sysbench_load' => (i == 1 ? 
true : false ), + 'tables' => 20, + 'rows' => 1000000, + 'threads' => 8 + } + } - end - end - -end + # Providers + provider_virtualbox( nil, node_config, 18432 ) { |vb, override| + provision_puppet( override, "pxc_server.pp" ) { |puppet| + puppet.facter = { + 'default_interface' => 'eth1', + 'datadir_dev' => 'dm-2', + } + } + } + provider_vmware( name, node_config, 18432 ) { |vb, override| + provision_puppet( override, "pxc_server.pp" ) { |puppet| + puppet.facter = { + 'default_interface' => 'eth1', + 'datadir_dev' => 'dm-2', + } + } + } + + provider_aws( "PXC #{name}", node_config, 'm3.xlarge', aws_region, pxc_security_groups, aws_ips) { |aws, override| + aws.block_device_mapping = [ + { 'DeviceName' => "/dev/sdb", 'VirtualName' => "ephemeral0" }, + { 'DeviceName' => "/dev/sdc", 'VirtualName' => "ephemeral1" } + ] + provision_puppet( override, "pxc_server.pp" ) { |puppet| + puppet.facter = { + 'softraid' => true, + 'softraid_dev' => '/dev/md0', + 'softraid_level' => 'stripe', + 'softraid_devices' => '2', + 'softraid_dev_str' => '/dev/xvdb /dev/xvdc', + 'datadir_dev' => 'md0' + } + } + } + end + end +end diff --git a/Vagrantfile.pxc.rb b/Vagrantfile.pxc.rb index 4cc80d0..7c18ce9 100644 --- a/Vagrantfile.pxc.rb +++ b/Vagrantfile.pxc.rb @@ -15,92 +15,97 @@ # AWS configuration aws_region = "us-east-1" -aws_ips='private' # Use 'public' for cross-region AWS. 'private' otherwise (or commented out) +aws_ips = 'private' # Use 'public' for cross-region AWS. 'private' otherwise (or commented out) pxc_security_groups = [] cluster_address = 'gcomm://' + Array.new( pxc_nodes ){ |i| pxc_node_name_prefix + (i+1).to_s }.join(',') - Vagrant.configure("2") do |config| config.vm.box = "grypyrg/centos-x86_64" config.ssh.username = "vagrant" + + # Create the PXC nodes + (1..pxc_nodes).each do |i| + name = pxc_node_name_prefix + i.to_s + config.vm.define name do |node_config| + node_config.vm.hostname = name + node_config.vm.network :private_network, type: "dhcp" + node_config.vm.provision :hostmanager - # Create the PXC nodes - (1..pxc_nodes).each do |i| - name = pxc_node_name_prefix + i.to_s - config.vm.define name do |node_config| - node_config.vm.hostname = name - node_config.vm.network :private_network, type: "dhcp" - node_config.vm.provision :hostmanager - - # Provisioners - provision_puppet( node_config, "pxc_server.pp" ) { |puppet| - puppet.facter = { - # PXC setup - "percona_server_version" => pxc_version, - 'innodb_buffer_pool_size' => '128M', - 'innodb_log_file_size' => '64M', - 'innodb_flush_log_at_trx_commit' => '0', - 'pxc_bootstrap_node' => (i == 1 ? true : false ), - 'wsrep_cluster_address' => cluster_address, - 'wsrep_provider_options' => 'gcache.size=128M; gcs.fc_limit=128', - - # Sysbench setup - 'sysbench_load' => (i == 1 ? true : false ), - 'tables' => 1, - 'rows' => 100000, - 'threads' => 8, - - # PCT setup - 'percona_agent_api_key' => ENV['PERCONA_AGENT_API_KEY'] - } - } + # Provisioners + provision_puppet( node_config, "pxc_server.pp" ) { |puppet| + puppet.facter = { + # PXC setup + "percona_server_version" => pxc_version, + 'innodb_buffer_pool_size' => '128M', + 'innodb_log_file_size' => '64M', + 'innodb_flush_log_at_trx_commit' => '0', + 'pxc_bootstrap_node' => (i == 1 ? 
true : false ), + 'wsrep_cluster_address' => cluster_address, + 'wsrep_provider_options' => 'gcache.size=128M; gcs.fc_limit=128', - # Providers - provider_virtualbox( nil, node_config, 1024 ) { |vb, override| - provision_puppet( override, "pxc_server.pp" ) {|puppet| - puppet.facter = { - 'default_interface' => 'eth1', - - # PXC Setup - 'datadir_dev' => 'dm-2', - } - } - } - provider_vmware( name, node_config, 1024 ) { |vb, override| - provision_puppet( override, "pxc_server.pp" ) {|puppet| - puppet.facter = { - 'default_interface' => 'eth1', - - # PXC Setup - 'datadir_dev' => 'dm-2', - } - } - } - - provider_aws( name, node_config, 'm3.medium', aws_region, pxc_security_groups, aws_ips) { |aws, override| - aws.block_device_mapping = [ - { - 'DeviceName' => "/dev/sdl", - 'VirtualName' => "mysql_data", - 'Ebs.VolumeSize' => 20, - 'Ebs.DeleteOnTermination' => true, - } - ] - provision_puppet( override, "pxc_server.pp" ) {|puppet| puppet.facter = { 'datadir_dev' => 'xvdl' }} - } + # Sysbench setup on node 1 + 'sysbench_load' => (i == 1 ? true : false ), + 'tables' => 1, + 'rows' => 100000, + 'threads' => 8 + } + } - provider_openstack( name, node_config, 'm1.xlarge', nil, 'cc7e31d8-a4aa-4544-8a74-86dfd06655d7' ) { |os, override| - os.disks = [ - { "name" => "#{name}-data", "size" => 100, "description" => "MySQL Data"} - ] - provision_puppet( override, "pxc_server.pp" ) { |puppet| - puppet.facter = {'datadir_dev' => 'vdb'} - } - } + # Providers + provider_virtualbox( nil, node_config, 1024 ) { |vb, override| + provision_puppet( override, "pxc_server.pp" ) { |puppet| + puppet.facter = { + 'default_interface' => 'eth1', + + # PXC Setup + 'datadir_dev' => 'dm-2', + } + } + } - end - end - -end + provider_vmware( name, node_config, 1024 ) { |vb, override| + provision_puppet( override, "pxc_server.pp" ) { |puppet| + puppet.facter = { + 'default_interface' => 'eth1', + 'datadir_dev' => 'dm-2', + } + } + } + + provider_aws( name, node_config, 'm3.medium', aws_region, pxc_security_groups, aws_ips) { |aws, override| + aws.block_device_mapping = [ + { + 'DeviceName' => "/dev/sdl", + 'VirtualName' => "mysql_data", + 'Ebs.VolumeSize' => 20, + 'Ebs.DeleteOnTermination' => true, + } + ] + provision_puppet( override, "pxc_server.pp" ) { |puppet| + puppet.facter = { + 'datadir_dev' => 'xvdl' + } + } + } + # If you wish to use OpenStack, you must already have + # an OpenStack installation up and running and the + # vagrant-openstack plugin installed. Then, uncomment these lines. 
+ + #provider_openstack( name, node_config, 'm1.xlarge', nil, 'cc7e31d8-a4aa-4544-8a74-86dfd06655d7' ) { |os, override| + # os.disks = [ + # { + # "name" => "#{name}-data", + # "size" => 100, + # "description" => "MySQL Data" + # } + # ] + # provision_puppet( override, "pxc_server.pp" ) { |puppet| + # puppet.facter = {'datadir_dev' => 'vdb'} + # } + #} + + end + end +end diff --git a/Vagrantfile.pxc_playground.rb b/Vagrantfile.pxc_playground.rb index 8a50ddb..bb4af64 100644 --- a/Vagrantfile.pxc_playground.rb +++ b/Vagrantfile.pxc_playground.rb @@ -89,11 +89,11 @@ node_config.vm.network :private_network, ip: '172.0.0.1' end - ssh_port = "882" + node_params["server_id"] - haproxy_port = "888" + node_params["server_id"] + ssh_port = "882" + node_params["server_id"] + haproxy_port = "888" + node_params["server_id"] - node_config.vm.network "forwarded_port", guest: 22, host: ssh_port, auto_correct: false - node_config.vm.network "forwarded_port", guest: 8080, host: haproxy_port, auto_correct: false + node_config.vm.network "forwarded_port", guest: 22, host: ssh_port, auto_correct: false + node_config.vm.network "forwarded_port", guest: 8080, host: haproxy_port, auto_correct: false # custom port forwarding node_config.vm.network "forwarded_port", guest: 8080, host: 8080, auto_correct: true @@ -103,27 +103,27 @@ provision_puppet( node_config, "pxc_playground.pp" ) { |puppet| puppet.facter = { - 'vagrant_hostname' => name, + 'vagrant_hostname' => name, "percona_server_version" => mysql_version, - "haproxy_servers" => serverlist, - "haproxy_disabled" => node_params['haproxy_disabled'], - "maxscale_disabled" => node_params['maxscale_disabled'], + "haproxy_servers" => serverlist, + "haproxy_disabled" => node_params['haproxy_disabled'], + "maxscale_disabled" => node_params['maxscale_disabled'], "haproxy_servers_primary" => pxc_nodes.select{|k,v| ! v.select{|k2,v2| k2=="haproxy_primary" && v2==true}.empty? 
}.map{|k3,v3| "#{k3}"}.join(','), - "maxscale_servers" => serverlist, - "cluster_servers" => serverlist, - "datadir_dev" => "dm-2", - 'datadir_fs' => "xfs", - 'percona_agent_enabled' => percona_agent_enabled, + "maxscale_servers" => serverlist, + "cluster_servers" => serverlist, + "datadir_dev" => "dm-2", + 'datadir_fs' => "xfs", + 'percona_agent_enabled' => percona_agent_enabled, 'percona_agent_api_key' => percona_agent_api_key, 'innodb_buffer_pool_size' => '128M', - 'innodb_log_file_size' => '64M', - 'innodb_flush_log_at_trx_commit' => '0', + 'innodb_log_file_size' => '64M', + 'innodb_flush_log_at_trx_commit' => '0', 'pxc_bootstrap_node' => node_params['pxc_bootstrap_node'], 'extra_mysqld_config' => 'wsrep_cluster_address=gcomm://' + pxc_nodes.map{|k,v| "#{k}"}.join(',') + "\n" + "wsrep_sst_receive_address=" + name + "\n" + "wsrep_node_address=" + name + "\n" + - "log_slave_updates\n" + + "log_slave_updates\n" + "server_id=" + node_params['server_id'] + "\n" + "log_bin" + "\n" } @@ -134,7 +134,6 @@ # 'wsrep_sst_receive_address=' + name + "\n" + # 'wsrep_node_address=' + name + "\n" + - # Providers provider_virtualbox( nil, node_config, 256) { |vb, override| provision_puppet( override, "pxc_playground.pp" ) {|puppet| diff --git a/lib/vagrant-common.rb b/lib/vagrant-common.rb index ee4be91..a3c1676 100644 --- a/lib/vagrant-common.rb +++ b/lib/vagrant-common.rb @@ -129,8 +129,7 @@ def provider_vmware ( name, config, ram = 256, cpus = 1 ) end end - -# Configure this node for Vmware +# Configure this node for Openstack def provider_openstack( name, config, flavor, security_groups = nil, network = nil, hostmanager_openstack_ips = nil ) require 'yaml' require 'vagrant-openstack-plugin' @@ -151,7 +150,6 @@ def provider_openstack( name, config, flavor, security_groups = nil, network = n os.keypair_name = os_config.fetch("keypair_name") override.ssh.private_key_path = os_config.fetch("private_key_path") - if security_groups != nil os.security_groups = security_groups end @@ -163,7 +161,6 @@ def provider_openstack( name, config, flavor, security_groups = nil, network = n os.floating_ip = :auto os.floating_ip_pool = "external-net" - if Vagrant.has_plugin?("vagrant-hostmanager") if hostmanager_openstack_ips == "private" or hostmanager_openstack_ips == nil awsrequest = "local-ipv4" @@ -194,20 +191,20 @@ def provider_openstack( name, config, flavor, security_groups = nil, network = n # -- config: vm config from Vagrantfile # -- manifest_file: puppet manifest to use (under puppet/manifests) def provision_puppet( config, manifest_file ) - config.vm.provision manifest_file, type:"puppet", preserve_order: true do |puppet| + config.vm.provision manifest_file, type:"puppet", preserve_order: true do |puppet| puppet.manifest_file = manifest_file - puppet.manifests_path = ["vm", "/vagrant/manifests"] - puppet.options = "--verbose --modulepath /vagrant/modules" - # puppet.options = "--verbose" - if block_given? - yield( puppet ) - end + puppet.manifests_path = ["vm", "/vagrant/manifests"] + puppet.options = "--verbose --modulepath /vagrant/modules" + # puppet.options = "--verbose" + if block_given? 
+ yield( puppet ) + end - # Check if the hostname is a proper string (won't be if config is an override config) - # If string, then set the vagrant_hostname facter fact automatically so base::hostname works + # Check if the hostname is a proper string (won't be if config is an override config) + # If string, then set the vagrant_hostname facter fact automatically so base::hostname works if config.vm.hostname.is_a?(String) - puppet.facter["vagrant_hostname"] = config.vm.hostname - end + puppet.facter["vagrant_hostname"] = config.vm.hostname + end end end diff --git a/manifests/percona_server.pp b/manifests/percona_server.pp index 4d1d988..bae3b96 100755 --- a/manifests/percona_server.pp +++ b/manifests/percona_server.pp @@ -36,7 +36,6 @@ Class['percona::repository'] -> Class['percona::server'] -> Class['percona::config'] -> Class['percona::service'] -> Class['percona::server-password'] -> Class['test::user'] - Class['base::packages'] -> Class['misc::myq_gadgets'] Class['base::packages'] -> Class['misc::myq_tools'] @@ -46,6 +45,7 @@ Class['percona::repository'] -> Class['percona::toolkit'] Class['percona::repository'] -> Class['percona::sysbench'] +Class['percona::server'] -> Class['percona::sysbench'] Class['percona::server'] -> Class['percona::toolkit'] Class['percona::service'] -> Class['test::user'] @@ -58,8 +58,8 @@ threads => $threads, engine => $engine } - - Class['percona::server'] -> Class['percona::sysbench'] + + Class['percona::server'] -> Class['percona::sysbench'] Class['percona::sysbench'] -> Class['test::sysbench_load'] Class['test::user'] -> Class['test::sysbench_load'] } diff --git a/manifests/pxc_server.pp b/manifests/pxc_server.pp index f166357..88e36c5 100644 --- a/manifests/pxc_server.pp +++ b/manifests/pxc_server.pp @@ -29,7 +29,7 @@ datadir_fs_opts => $datadir_fs_opts, datadir_mkfs_opts => $datadir_mkfs_opts } - + Class['mysql::datadir'] -> Class['percona::cluster::server'] if $softraid == 'true' { @@ -56,6 +56,8 @@ Class['percona::repository'] -> Class['percona::toolkit'] Class['percona::repository'] -> Class['percona::sysbench'] +Class['percona::cluster::server'] -> Class['percona::sysbench'] + Class['percona::cluster::client'] -> Class['percona::toolkit'] Class['percona::cluster::service'] -> Class['test::user'] @@ -78,24 +80,24 @@ info( 'enabling consul agent' ) $config_hash = delete_undef_values( { - 'datacenter' => $datacenter, - 'data_dir' => '/opt/consul', - 'log_level' => 'INFO', - 'node_name' => $node_name ? { - undef => $vagrant_hostname, - default => $node_name - }, - 'bind_addr' => $default_interface ? { - undef => undef, - default => getvar("ipaddress_${default_interface}") - }, - 'client_addr' => '0.0.0.0', + 'datacenter' => $datacenter, + 'data_dir' => '/opt/consul', + 'log_level' => 'INFO', + 'node_name' => $node_name ? { + undef => $vagrant_hostname, + default => $node_name + }, + 'bind_addr' => $default_interface ? 
{ + undef => undef, + default => getvar("ipaddress_${default_interface}") + }, + 'client_addr' => '0.0.0.0', }) class { 'consul': join_cluster => $join_cluster, - config_hash => $config_hash + config_hash => $config_hash } include consul::local_dns @@ -106,21 +108,16 @@ } -if ( $percona_agent_api_key ) { - include percona::agent - - Class['percona::cluster::service'] -> Class['percona::agent'] -} - if ( $vividcortex_api_key ) { class { 'misc::vividcortex': api_key => $vividcortex_api_key } - - Class['percona::cluster::service'] -> Class['misc::vividcortex'] + + Class['percona::cluster::service'] -> Class['misc::vividcortex'] } if $sysbench_skip_test_client != 'true' { - include test::sysbench_test_script + include test::sysbench_test_script + Class['percona::cluster::server'] -> Class['test::sysbench_test_script'] } diff --git a/modules/mysql/templates/my.cnf.erb b/modules/mysql/templates/my.cnf.erb index a24a4b6..8f20162 100644 --- a/modules/mysql/templates/my.cnf.erb +++ b/modules/mysql/templates/my.cnf.erb @@ -1,23 +1,23 @@ [mysqld] -datadir = /var/lib/mysql -log_error = error.log +datadir = /var/lib/mysql +log_error = error.log log-bin -server-id = <%= @server_id %> +server-id = <%= @server_id %> -query_cache_size=0 -query_cache_type=0 +query_cache_size = 0 +query_cache_type = 0 -innodb_buffer_pool_size = <%= @innodb_buffer_pool_size %> -innodb_log_file_size = <%= @innodb_log_file_size %> -innodb_flush_method = O_DIRECT +innodb_buffer_pool_size = <%= @innodb_buffer_pool_size %> +innodb_log_file_size = <%= @innodb_log_file_size %> +innodb_flush_method = O_DIRECT innodb_file_per_table innodb_flush_log_at_trx_commit = <%= @innodb_flush_log_at_trx_commit %> <%=@extra_mysqld_config %> [mysql] -prompt = "<%=@vagrant_hostname %> mysql> " +prompt = "<%=@vagrant_hostname %> mysql> " [client] -user = root +user = root diff --git a/modules/percona/manifests/cluster/service.pp b/modules/percona/manifests/cluster/service.pp index 14bcafa..0552c6c 100644 --- a/modules/percona/manifests/cluster/service.pp +++ b/modules/percona/manifests/cluster/service.pp @@ -1,43 +1,44 @@ class percona::cluster::service { # We bootstrap the bootstrap-ed node. - # This means that atm. when we do `vagrant provision` while that node - # MySQL is not running, it will rebootstrap and potentially create + # This means that when provisioning happens on that node, + # MySQL will (re)start, bootstrap, and potentially create # a new cluster. This can have nasty consequences for your environment + # if you don't understand all the pieces. + if( $pxc_bootstrap_node == "true" or $pxc_bootstrap_node == true) { - # We do not use the redhat provider but the old fashioned init scripts - # by using the 'base' provider. this allows sending pxc-bootstrap as + # We do not use the redhat provider but the old-fashioned init scripts + # by using the 'base' provider. 
this allows sending pxc-bootstrap as # command instead of 'start' - - if( $operatingsystem == 'centos' and $operatingsystemrelease =~ /^7/ ) { #7.0.1406 + + if( $operatingsystem == 'centos' and $operatingsystemrelease =~ /^7/ ) { service { "mysql": - enable => true, - ensure => 'running', + enable => true, + ensure => 'running', provider => 'base', start => "(test -f /var/lib/mysql/grastate.dat && systemctl start mysql) || systemctl start mysql@bootstrap", - require => Package['MySQL-server'], - subscribe => File["/etc/my.cnf"]; + require => Package['MySQL-server'], + subscribe => File["/etc/my.cnf"]; } } else { - service { "mysql": - enable => true, - ensure => 'running', + enable => true, + ensure => 'running', provider => 'base', status => "/etc/init.d/mysql status", start => "(test -f /var/lib/mysql/grastate.dat && /etc/init.d/mysql start) || /etc/init.d/mysql bootstrap-pxc", stop => "/etc/init.d/mysql stop", - require => Package['MySQL-server'], - subscribe => File["/etc/my.cnf"]; + require => Package['MySQL-server'], + subscribe => File["/etc/my.cnf"]; } } } else { service { 'mysql': - ensure => 'running', - subscribe => File["/etc/my.cnf"]; + ensure => 'running', + subscribe => File["/etc/my.cnf"]; } } } \ No newline at end of file diff --git a/modules/percona/manifests/config.pp b/modules/percona/manifests/config.pp index 4210eef..9f61492 100644 --- a/modules/percona/manifests/config.pp +++ b/modules/percona/manifests/config.pp @@ -20,10 +20,9 @@ $extra_mysqld_config = '' } - file { "/etc/my.cnf": - ensure => present, + ensure => file, content => template("percona/my.cnf.erb"), require => File['/etc/mysql.d']; "/etc/mysql.d": diff --git a/modules/percona/manifests/server-password.pp b/modules/percona/manifests/server-password.pp index ef3d267..524f0a2 100644 --- a/modules/percona/manifests/server-password.pp +++ b/modules/percona/manifests/server-password.pp @@ -3,7 +3,7 @@ class percona::server-password { if $percona_server_version == "57" or $percona_server_version == "-57" { exec {"remove57randompassword": - command => 'mysql -u root -p`cat /var/lib/mysql/error.log | grep "A temporary password is generated for root@localhost" | tail -n 1 | awk "{print \\$(NF)}"` --connect-expired-password -e "set password=\"\""', + command => 'mysql -u root -p`grep "A temporary password is generated for root@localhost" /var/lib/mysql/error.log | tail -n 1 | awk "{print \\$(NF)}"` --connect-expired-password -e "set password=\"\""', path => "/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin", unless => "/usr/bin/mysqladmin ext", require => Service["mysql"] diff --git a/modules/percona/manifests/server.pp b/modules/percona/manifests/server.pp index 010cb85..cc41b89 100644 --- a/modules/percona/manifests/server.pp +++ b/modules/percona/manifests/server.pp @@ -20,10 +20,11 @@ case $operatingsystem { centos: { package { + "mariadb-libs": + ensure => purged; "Percona-Server-client-$percona_server_version.$hardwaremodel": alias => "MySQL-client", ensure => latest; - "Percona-Server-client-$other_percona_server_version.$hardwaremodel": before => Package["Percona-Server-client-$percona_server_version.$hardwaremodel"], require => Package["Percona-Server-server-$other_percona_server_version.$hardwaremodel"], @@ -32,7 +33,6 @@ before => Package["Percona-Server-client-$percona_server_version.$hardwaremodel"], require => Package["Percona-Server-server-$other_percona_server_version2.$hardwaremodel"], ensure => absent; - "Percona-Server-server-$percona_server_version.$hardwaremodel": alias 
=> "MySQL-server", require => Package["MySQL-client"], diff --git a/modules/percona/templates/my.cnf.erb b/modules/percona/templates/my.cnf.erb index 5cfbc04..05f2e8e 100644 --- a/modules/percona/templates/my.cnf.erb +++ b/modules/percona/templates/my.cnf.erb @@ -1,31 +1,32 @@ [mysqld] -datadir = /var/lib/mysql -log_error = error.log +datadir = /var/lib/mysql +log_error = error.log socket = /var/lib/mysql/mysql.sock log-bin -server-id = <%= @server_id %> +server-id = <%= @server_id %> -query_cache_size=0 -query_cache_type=0 +query_cache_size = 0 +query_cache_type = 0 -innodb_buffer_pool_size = <%= @innodb_buffer_pool_size %> -innodb_log_file_size = <%= @innodb_log_file_size %> -innodb_flush_method = O_DIRECT +innodb_buffer_pool_size = <%= @innodb_buffer_pool_size %> +innodb_log_file_size = <%= @innodb_log_file_size %> +innodb_flush_method = O_DIRECT innodb_file_per_table innodb_flush_log_at_trx_commit = <%= @innodb_flush_log_at_trx_commit %> -performance-schema-consumer-events-statements-history=ON + +performance-schema-consumer-events-statements-history = ON <%=@extra_mysqld_config %> -loose-validate-password=OFF +loose-validate-password = OFF [mysql] -prompt = "<%=@vagrant_hostname %> mysql> " +prompt = "<%=@vagrant_hostname %> mysql> " [client] -user = root +user = root socket = /var/lib/mysql/mysql.sock !includedir /etc/mysql.d diff --git a/modules/test/manifests/sysbench_load.pp b/modules/test/manifests/sysbench_load.pp index 4624451..abd732a 100644 --- a/modules/test/manifests/sysbench_load.pp +++ b/modules/test/manifests/sysbench_load.pp @@ -11,9 +11,9 @@ cwd => '/root', creates => "/var/lib/mysql/$schema/"; 'prepare_database': - command => "sysbench --test=sysbench_tests/db/parallel_prepare.lua --db-driver=mysql --mysql-table-engine=$engine --mysql-user=root --mysql-db=$schema --oltp-tables-count=$tables --oltp-table-size=$rows --oltp-auto-inc=off --num-threads=$threads run", + command => "sysbench --test=sysbench_tests/db/parallel_prepare.lua --db-driver=mysql --mysql-table-engine=$engine --mysql-user=root --mysql-db=$schema --oltp-tables-count=$tables --oltp-table-size=$rows --oltp-auto-inc=off --max-requests=$threads --num-threads=$threads run", timeout => 0, # unlimited - logoutput => true, + logoutput => 'on_failure', path => ['/usr/bin', '/bin', '/usr/local/bin'], cwd => '/root', creates => "/var/lib/mysql/$schema/sbtest$tables.frm", @@ -21,4 +21,4 @@ } -} \ No newline at end of file +} diff --git a/modules/test/manifests/sysbench_test_script.pp b/modules/test/manifests/sysbench_test_script.pp index 5a23df5..ab4ed30 100644 --- a/modules/test/manifests/sysbench_test_script.pp +++ b/modules/test/manifests/sysbench_test_script.pp @@ -1,18 +1,18 @@ class test::sysbench_test_script { - if !$mysql_host { $mysql_host = 'localhost' } - if !$mysql_port { $mysql_port = '3306' } - if !$schema { $schema = 'sbtest' } - if !$tables { $tables = 1 } - if !$rows { $rows = 100000 } - if !$threads { $threads = 1 } - if !$tx_rate { $tx_rate = 0 } - if !$engine { $engine = 'innodb' } + if !$mysql_host { $mysql_host = 'localhost' } + if !$mysql_port { $mysql_port = '3306' } + if !$schema { $schema = 'sbtest' } + if !$tables { $tables = 1 } + if !$rows { $rows = 100000 } + if !$threads { $threads = 1 } + if !$tx_rate { $tx_rate = 0 } + if !$engine { $engine = 'innodb' } file { '/usr/local/bin/run_sysbench_reload.sh': ensure => present, - content => "sysbench --db-driver=mysql --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-table-engine=$engine --mysql-user=test --mysql-password=test 
--mysql-db=$schema --mysql-host=$mysql_host --mysql-port=$mysql_port --oltp-tables-count=$tables cleanup -sysbench --db-driver=mysql --test=/usr/share/doc/sysbench/tests/db/parallel_prepare.lua --mysql-table-engine=$engine --mysql-user=test --mysql-password=test --mysql-db=$schema --mysql-host=$mysql_host --mysql-port=$mysql_port --oltp-tables-count=$tables --oltp-table-size=$rows --oltp-auto-inc=off --max-requests=1 run", + content => "sysbench --db-driver=mysql --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-table-engine=$engine --mysql-user=test --mysql-password=test --mysql-db=$schema --mysql-host=$mysql_host --mysql-port=$mysql_port --oltp-tables-count=$tables cleanup +sysbench --db-driver=mysql --test=/usr/share/doc/sysbench/tests/db/parallel_prepare.lua --mysql-table-engine=$engine --mysql-user=test --mysql-password=test --mysql-db=$schema --mysql-host=$mysql_host --mysql-port=$mysql_port --oltp-tables-count=$tables --oltp-table-size=$rows --oltp-auto-inc=off --max-requests=1 run", mode => 0755; } @@ -22,19 +22,19 @@ content => "sysbench --db-driver=mysql --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-user=test --mysql-password=test --mysql-db=$schema --mysql-host=$mysql_host --mysql-port=$mysql_port --mysql-ignore-errors=all --oltp-tables-count=$tables --oltp-table-size=$rows --oltp-auto-inc=off --num-threads=$threads --report-interval=1 --max-requests=0 --tx-rate=$tx_rate run | grep tps", mode => 0755; } - + file { '/usr/local/bin/run_sysbench_update_index.sh': ensure => present, content => "sysbench --db-driver=mysql --test=/usr/share/doc/sysbench/tests/db/update_index.lua --mysql-user=test --mysql-password=test --mysql-db=$schema --mysql-host=$mysql_host --mysql-port=$mysql_port --mysql-ignore-errors=all --oltp-tables-count=$tables --oltp-table-size=$rows --oltp-auto-inc=off --num-threads=$threads --report-interval=1 --max-requests=0 --tx-rate=$tx_rate run | grep tps", mode => 0755; } - + file { '/usr/local/bin/run_sysbench.sh': ensure => present, source => "puppet:///modules/test/run_sysbench.sh", - mode => 0755; + mode => 0755; '/var/lib/mysql/sbtest': ensure => directory, owner => 'mysql', @@ -42,19 +42,18 @@ mode => '0755', } - if $enable_consul == 'true' { - # Watch for a test in consul and trigger it when the appropriate key/value is set + # Watch for a test in consul and trigger it when the appropriate key/value is set consul::watch { - 'test': type => 'event', handler => 'wall test consul event'; - 'sysbench_stop': type => 'event', handler => 'killall sysbench'; - 'sysbench_oltp': type => 'event', handler => "pidof sysbench || /usr/local/bin/run_sysbench_oltp.sh"; - 'sysbench_update_index': type => 'event', handler => "pidof sysbench || /usr/local/bin/run_sysbench_update_index.sh"; + 'test': type => 'event', handler => 'wall test consul event'; + 'sysbench_stop': type => 'event', handler => 'killall sysbench'; + 'sysbench_oltp': type => 'event', handler => "pidof sysbench || /usr/local/bin/run_sysbench_oltp.sh"; + 'sysbench_update_index': type => 'event', handler => "pidof sysbench || /usr/local/bin/run_sysbench_update_index.sh"; } - + consul::service { - 'sysbench_running': checks => [{script => "killall -0 sysbench", interval => '10s'}]; - 'sysbench_ready': checks => [{script => "which sysbench", interval => '1m'}]; - } - } + 'sysbench_running': checks => [{script => "killall -0 sysbench", interval => '10s'}]; + 'sysbench_ready': checks => [{script => "which sysbench", interval => '1m'}]; + } + } } diff --git a/ms-setup.pl b/ms-setup.pl index 
f1c63fe..5d72178 100755 --- a/ms-setup.pl +++ b/ms-setup.pl @@ -18,7 +18,7 @@ # Harvest node ips foreach my $node( @running_nodes ) { my $nic = 'eth1'; - $nic = 'eth0' if ($node->{provider} eq 'aws' || $node->{provider} eq 'virtualbox'); + $nic = 'eth0' if $node->{provider} eq 'aws'; my $ip_str = `vagrant ssh $node->{name} -c "ip a l | grep $nic | grep inet"`; if( $ip_str =~ m/inet\s(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\// ) { @@ -32,7 +32,7 @@ my $master = shift @running_nodes; my $master_ip = $master->{ip}; -print "Master node will be: $master->{name}\n"; +print "Master node will be: $master->{name} ($master->{ip})\n"; # Get Master binlog file and position my $master_status =<{name}\n"; my $grant =<{name} -c \"$grant\""); # Configure the slaves -print <{name}\n"; + print "Executing CHANGE MASTER and START SLAVE on '$slave->{name}'\n"; system( "vagrant ssh $slave->{name} -c \"$change_master\""); }
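A closing usage note for ms-setup.pl: once it completes, replication health can be spot-checked from the host via `vagrant ssh`, the same mechanism the script itself uses. A minimal sketch, assuming the `master`/`slave1` node names from Vagrantfile.ms.rb:

```bash
# Both replication threads should report "Yes" on each slave
# (hypothetical quick check; node name per Vagrantfile.ms.rb)
vagrant ssh slave1 -c "mysql -e 'SHOW SLAVE STATUS\G'" | grep -E 'Slave_(IO|SQL)_Running:'
```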