
Commit 7f250fc
Support for pxc-big. Formatting consistencies. README updates
utdrmac committed Jan 8, 2017
1 parent 389f918 commit 7f250fc
Showing 5 changed files with 126 additions and 102 deletions.
35 changes: 34 additions & 1 deletion README.md
@@ -184,13 +184,46 @@ vagrant up --provider [aws|virtualbox]

## PXC

This Vagrantfile will launch 3 Percona XtraDB Cluster nodes in either VirtualBox or AWS. The first node is automatically bootstrapped to form the cluster and the remaining 2 nodes will join.
This Vagrantfile will launch 3 Percona XtraDB Cluster 5.7 nodes in either VirtualBox or AWS. The InnoDB buffer pool is set to 128MB. The first node is automatically bootstrapped to form the cluster, and the remaining 2 nodes then join it.

Each VirtualBox instance is launched with 256MB of memory.

Each EC2 instance will use the `m3.medium` instance type, which has 3.75GB of RAM.

```bash
ln -sf Vagrantfile.pxc.rb Vagrantfile
vagrant up
```

_NOTE:_ Because Vagrant can build instances in parallel on AWS, there is no guarantee "node 1" will bootstrap before the other 2. If that happens, node 2 and node 3 will be unable to join the cluster. It is therefore recommended that you launch node 1 manually first, then launch the remaining nodes. _(This is not an issue with VirtualBox, as parallel builds are not supported there.)_

Example:

```bash
vagrant up node1 && sleep 5 && vagrant up
```
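
Once all 3 nodes are up, you can verify that they formed a single cluster by checking the Galera cluster size from any node. This is a minimal sketch; it assumes the MySQL root user can log in locally without a password on these test VMs:

```bash
# Expect wsrep_cluster_size = 3 once every node has joined
vagrant ssh node1 -c "mysql -u root -e \"SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'\""
```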

## PXC (Big)

This Vagrantfile will launch 3 Percona XtraDB Cluster 5.7 nodes in either VirtualBox or AWS. The InnoDB buffer pool is set to _12GB_.

_WARNING:_ This configuration requires 15GB of RAM per virtual machine. Most consumer laptops and desktops do not have enough RAM to run multiple nodes of this size.

Each EC2 instance will use the `m3.xlarge` instance type, which has 15GB of RAM.

```bash
ln -sf Vagrantfile.pxc-big.rb Vagrantfile
vagrant up
```

_NOTE:_ Because Vagrant can build instances in parallel on AWS, there is no guarantee "node 1" will bootstrap before the other 2. If that happens, node 2 and node 3 will be unable to join the cluster. It is therefore recommended that you launch node 1 manually first, then launch the remaining nodes. _(This is not an issue with VirtualBox, as parallel builds are not supported there.)_

Example:

```bash
vagrant up node1 && sleep 5 && vagrant up
```
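
As a quick sanity check after provisioning, you can confirm that the larger buffer pool was applied. A minimal sketch, again assuming passwordless local root access to MySQL:

```bash
# A 12GB buffer pool reports 12884901888 bytes
vagrant ssh node1 -c "mysql -u root -e 'SELECT @@innodb_buffer_pool_size'"
```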

## Using this repo to create benchmarks

I use a system where I define this repo as a submodule in a test-specific git repo and do all the customization for the test there.
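
For example, a minimal sketch of that layout (the repository URL and directory name below are placeholders, not the actual ones):

```bash
# In the test-specific repo: add this repo as a submodule, then pick a Vagrantfile variant
git submodule add <url-of-this-repo> vagrant-percona
cd vagrant-percona
ln -sf Vagrantfile.pxc.rb Vagrantfile
vagrant up
```
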
139 changes: 65 additions & 74 deletions Vagrantfile.pxc-big.rb
@@ -15,88 +15,79 @@

# AWS configuration
aws_region = "us-west-1"
aws_ips='private' # Use 'public' for cross-region AWS. 'private' otherwise (or commented out)
pxc_security_groups = ['default','pxc']
aws_ips='private' # Use 'public' for cross-region AWS. 'private' otherwise (or commented out)
pxc_security_groups = ['sg-b4438ad3']

cluster_address = 'gcomm://' + Array.new( pxc_nodes ){ |i| pxc_node_name_prefix + (i+1).to_s }.join(',')
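# (Illustrative note, assumed defaults) The line above builds the Galera connection
# string, e.g. "gcomm://node1,node2,node3" when pxc_node_name_prefix is "node" and
# pxc_nodes is 3.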

Vagrant.configure("2") do |config|
config.vm.box = "grypyrg/centos-x86_64"
config.ssh.username = "vagrant"

# Create the PXC nodes
(1..pxc_nodes).each do |i|
name = pxc_node_name_prefix + i.to_s
config.vm.define name do |node_config|
node_config.vm.hostname = name
node_config.vm.network :private_network, type: "dhcp"
node_config.vm.provision :hostmanager

# Provisioners
provision_puppet( node_config, "pxc_server.pp" ) { |puppet|
puppet.facter = {
# PXC setup
"percona_server_version" => pxc_version,
'innodb_buffer_pool_size' => '12G',
'innodb_log_file_size' => '1G',
'innodb_flush_log_at_trx_commit' => '0',
'pxc_bootstrap_node' => (i == 1 ? true : false ),
'wsrep_cluster_address' => cluster_address,
'wsrep_provider_options' => 'gcache.size=128M; gcs.fc_limit=1024; evs.user_send_window=512; evs.send_window=512',
'wsrep_slave_threads' => 8,

# Sysbench setup
'sysbench_load' => (i == 1 ? true : false ),
'tables' => 20,
'rows' => 1000000,
'threads' => 8,
# 'tx_rate' => 10,

# PCT setup
'percona_agent_api_key' => ENV['PERCONA_AGENT_API_KEY']
}
}
# Create the PXC nodes
(1..pxc_nodes).each do |i|
name = pxc_node_name_prefix + i.to_s
config.vm.define name do |node_config|
node_config.vm.hostname = name
node_config.vm.network :private_network, type: "dhcp"
node_config.vm.provision :hostmanager

# Providers
provider_virtualbox( nil, node_config, 256 ) { |vb, override|
provision_puppet( override, "pxc_server.pp" ) {|puppet|
puppet.facter = {
'default_interface' => 'eth1',

# PXC Setup
'datadir_dev' => 'dm-2',
}
}
}
provider_vmware( name, node_config, 256 ) { |vb, override|
provision_puppet( override, "pxc_server.pp" ) {|puppet|
puppet.facter = {
'default_interface' => 'eth1',

# PXC Setup
'datadir_dev' => 'dm-2',
}
}
}

provider_aws( "PXC #{name}", node_config, 'm3.xlarge', aws_region, pxc_security_groups, aws_ips) { |aws, override|
aws.block_device_mapping = [
{ 'DeviceName' => "/dev/sdb", 'VirtualName' => "ephemeral0" },
{ 'DeviceName' => "/dev/sdc", 'VirtualName' => "ephemeral1" }
]
provision_puppet( override, "pxc_server.pp" ) {|puppet| puppet.facter = {
'softraid' => true,
'softraid_dev' => '/dev/md0',
'softraid_level' => 'stripe',
'softraid_devices' => '2',
'softraid_dev_str' => '/dev/xvdb /dev/xvdc',
# Provisioners
provision_puppet( node_config, "pxc_server.pp" ) { |puppet|
puppet.facter = {
# PXC setup
"percona_server_version" => pxc_version,
'innodb_buffer_pool_size' => '12G',
'innodb_log_file_size' => '1G',
'innodb_flush_log_at_trx_commit' => '0',
'pxc_bootstrap_node' => (i == 1 ? true : false ),
'wsrep_cluster_address' => cluster_address,
'wsrep_provider_options' => 'gcache.size=128M; gcs.fc_limit=1024; evs.user_send_window=512; evs.send_window=512',
'wsrep_slave_threads' => 8,

'datadir_dev' => 'md0'
}}
}
# Sysbench setup on node 1
'sysbench_load' => (i == 1 ? true : false ),
'tables' => 20,
'rows' => 1000000,
'threads' => 8
}
}

end
end

end
# Providers
provider_virtualbox( nil, node_config, 18432 ) { |vb, override|
provision_puppet( override, "pxc_server.pp" ) { |puppet|
puppet.facter = {
'default_interface' => 'eth1',
'datadir_dev' => 'dm-2',
}
}
}

provider_vmware( name, node_config, 18432 ) { |vb, override|
provision_puppet( override, "pxc_server.pp" ) { |puppet|
puppet.facter = {
'default_interface' => 'eth1',
'datadir_dev' => 'dm-2',
}
}
}

provider_aws( "PXC #{name}", node_config, 'm3.xlarge', aws_region, pxc_security_groups, aws_ips) { |aws, override|
aws.block_device_mapping = [
{ 'DeviceName' => "/dev/sdb", 'VirtualName' => "ephemeral0" },
{ 'DeviceName' => "/dev/sdc", 'VirtualName' => "ephemeral1" }
]
provision_puppet( override, "pxc_server.pp" ) { |puppet|
puppet.facter = {
'softraid' => true,
'softraid_dev' => '/dev/md0',
'softraid_level' => 'stripe',
'softraid_devices' => '2',
'softraid_dev_str' => '/dev/xvdb /dev/xvdc',
'datadir_dev' => 'md0'
}
}
}
end
end
end
23 changes: 13 additions & 10 deletions Vagrantfile.pxc.rb
@@ -31,7 +31,7 @@
node_config.vm.hostname = name
node_config.vm.network :private_network, type: "dhcp"
node_config.vm.provision :hostmanager

# Provisioners
provision_puppet( node_config, "pxc_server.pp" ) { |puppet|
puppet.facter = {
@@ -43,18 +43,18 @@
'pxc_bootstrap_node' => (i == 1 ? true : false ),
'wsrep_cluster_address' => cluster_address,
'wsrep_provider_options' => 'gcache.size=128M; gcs.fc_limit=128',

# Sysbench setup on node 1
'sysbench_load' => (i == 1 ? true : false ),
'tables' => 1,
'rows' => 100000,
'threads' => 8,
'threads' => 8
}
}

# Providers
provider_virtualbox( nil, node_config, 1024 ) { |vb, override|
provision_puppet( override, "pxc_server.pp" ) {|puppet|
provision_puppet( override, "pxc_server.pp" ) { |puppet|
puppet.facter = {
'default_interface' => 'eth1',

@@ -63,16 +63,16 @@
}
}
}

provider_vmware( name, node_config, 1024 ) { |vb, override|
provision_puppet( override, "pxc_server.pp" ) {|puppet|
provision_puppet( override, "pxc_server.pp" ) { |puppet|
puppet.facter = {
'default_interface' => 'eth1',
'datadir_dev' => 'dm-2',
}
}
}

provider_aws( name, node_config, 'm3.medium', aws_region, pxc_security_groups, aws_ips) { |aws, override|
aws.block_device_mapping = [
{
@@ -82,7 +82,11 @@
'Ebs.DeleteOnTermination' => true,
}
]
provision_puppet( override, "pxc_server.pp" ) {|puppet| puppet.facter = { 'datadir_dev' => 'xvdl' }}
provision_puppet( override, "pxc_server.pp" ) { |puppet|
puppet.facter = {
'datadir_dev' => 'xvdl'
}
}
}

# If you wish to use with OpenStack, you must previously have
@@ -105,4 +105,3 @@
end
end
end

27 changes: 12 additions & 15 deletions lib/vagrant-common.rb
@@ -129,8 +129,7 @@ def provider_vmware ( name, config, ram = 256, cpus = 1 )
end
end


# Configure this node for Vmware
# Configure this node for Openstack
def provider_openstack( name, config, flavor, security_groups = nil, network = nil, hostmanager_openstack_ips = nil )
require 'yaml'
require 'vagrant-openstack-plugin'
@@ -151,7 +150,6 @@ def provider_openstack( name, config, flavor, security_groups = nil, network = n
os.keypair_name = os_config.fetch("keypair_name")
override.ssh.private_key_path = os_config.fetch("private_key_path")


if security_groups != nil
os.security_groups = security_groups
end
@@ -163,7 +161,6 @@ def provider_openstack( name, config, flavor, security_groups = nil, network = n
os.floating_ip = :auto
os.floating_ip_pool = "external-net"


if Vagrant.has_plugin?("vagrant-hostmanager")
if hostmanager_openstack_ips == "private" or hostmanager_openstack_ips == nil
awsrequest = "local-ipv4"
@@ -194,20 +191,20 @@ def provider_openstack( name, config, flavor, security_groups = nil, network = n
# -- config: vm config from Vagrantfile
# -- manifest_file: puppet manifest to use (under puppet/manifests)
def provision_puppet( config, manifest_file )
config.vm.provision manifest_file, type:"puppet", preserve_order: true do |puppet|
config.vm.provision manifest_file, type:"puppet", preserve_order: true do |puppet|
puppet.manifest_file = manifest_file
puppet.manifests_path = ["vm", "/vagrant/manifests"]
puppet.options = "--verbose --modulepath /vagrant/modules"
# puppet.options = "--verbose"
if block_given?
yield( puppet )
end
puppet.manifests_path = ["vm", "/vagrant/manifests"]
puppet.options = "--verbose --modulepath /vagrant/modules"
# puppet.options = "--verbose"
if block_given?
yield( puppet )
end

# Check if the hostname is a proper string (won't be if config is an override config)
# If string, then set the vagrant_hostname facter fact automatically so base::hostname works
# Check if the hostname is a proper string (won't be if config is an override config)
# If string, then set the vagrant_hostname facter fact automatically so base::hostname works
if config.vm.hostname.is_a?(String)
puppet.facter["vagrant_hostname"] = config.vm.hostname
end
puppet.facter["vagrant_hostname"] = config.vm.hostname
end

end
end
4 changes: 2 additions & 2 deletions modules/test/manifests/sysbench_load.pp
@@ -11,7 +11,7 @@
cwd => '/root',
creates => "/var/lib/mysql/$schema/";
'prepare_database':
command => "sysbench --test=sysbench_tests/db/parallel_prepare.lua --db-driver=mysql --mysql-table-engine=$engine --mysql-user=root --mysql-db=$schema --oltp-tables-count=$tables --oltp-table-size=$rows --oltp-auto-inc=off --num-threads=$threads run",
command => "sysbench --test=sysbench_tests/db/parallel_prepare.lua --db-driver=mysql --mysql-table-engine=$engine --mysql-user=root --mysql-db=$schema --oltp-tables-count=$tables --oltp-table-size=$rows --oltp-auto-inc=off --max-requests=$threads --num-threads=$threads run",
timeout => 0, # unlimited
logoutput => 'on_failure',
path => ['/usr/bin', '/bin', '/usr/local/bin'],
@@ -21,4 +21,4 @@
}


}
}
