This repository has been archived by the owner on Apr 27, 2022. It is now read-only.

Merge pull request #25 from utdrmac/various-fixes
Various fixes for M/S and PXC files
grypyrg authored Jan 9, 2017
2 parents 6312496 + 7b68cff commit fb94fe5
Showing 17 changed files with 376 additions and 339 deletions.
84 changes: 59 additions & 25 deletions README.md
@@ -21,7 +21,6 @@
Principles/goals of this environment:
* Sample databases
* Misc: local repos for conference VMs


## Walkthrough

This section should get you up and running.
@@ -95,7 +94,6 @@
instance_name_prefix: SOME_NAME_PREFIX
default_vpc_subnet_id: subnet-896602d0
```
#### Multi-region
Multiple AWS regions can be supported by adding a 'regions' hash to the .aws_secrets file:
@@ -122,24 +120,29 @@
regions:
Note that the default 'keypair_name' and 'keypair_path' can still be used. Region will default to 'us-east-1' unless you specifically override it.
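The 'regions' example itself is collapsed out of this view, so the following is a rough sketch only: the nesting and every key name besides `keypair_name`, `keypair_path`, and `default_vpc_subnet_id` (which appear elsewhere in this document) are assumptions, not taken from the repo.

```
regions:
  us-east-1:
    keypair_name: my-east-keypair
    keypair_path: ~/keys/my-east-keypair.pem
    default_vpc_subnet_id: subnet-896602d0
  us-west-2:
    keypair_name: my-west-keypair
    keypair_path: ~/keys/my-west-keypair.pem
```

Per the note above, a region that omits `keypair_name`/`keypair_path` would fall back to the top-level defaults in .aws_secrets.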
#### Boxes and Multiple AWS Regions

AMI's are region-specific. The AWS Vagrant boxes you use must include AMI's for each region in which you wish to deploy.

For an example, see the regions listed here: https://vagrantcloud.com/grypyrg/centos-x86_64.

Packer, which is used to build this box, can be configured to add more regions if desired, but doing so requires building a new box.
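As a hedged illustration only (the box's real Packer template is not part of this commit; the field names come from Packer's `amazon-ebs` builder and every value below is a placeholder), adding regions is typically a matter of listing them in `ami_regions` and rebuilding:

```
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "ami_regions": ["us-west-2", "eu-west-1"],
    "source_ami": "ami-xxxxxxxx",
    "instance_type": "t2.micro",
    "ssh_username": "centos",
    "ami_name": "centos-x86_64-{{timestamp}}"
  }]
}
```

Packer copies the finished AMI into each listed region after the build, which is why supporting a new region means building a new box.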
#### AWS VPC Integration

The latest versions of the grypyrg/centos-x86-64 boxes require a VPC, since AWS now requires a VPC for all instances.

As shown in the example above, you must set the `default_vpc_subnet_id` in the ~/.aws_secrets file. You can override this on a per-region basis.

You can also pass a `subnet_id` into the `provider_aws` method using an override in your Vagrantfile.
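A per-instance override might look like the following sketch (hypothetical: the exact `provider_aws` block signature is defined in lib/vagrant-common.rb and is not shown in this commit; `aws.subnet_id` is the vagrant-aws plugin's setting):

```ruby
provider_aws( name, node_config, 'm3.medium' ) { |aws, override|
  # Hypothetical: pin this one instance to a specific VPC subnet
  aws.subnet_id = "subnet-896602d0"
}
```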

### Clone this repo

```bash
git clone <clone URL>
cd vagrant-percona
git submodule init
git submodule update --recursive
```

### Launch the box
@@ -156,42 +159,70 @@
vagrant ssh

When you create a lot of vagrant environments with vagrant-percona, creating and renaming those Vagrantfile files can quickly get messy.

The repository contains a small script that creates a new environment: it builds a new directory with the proper Vagrantfile files and links to the puppet code.

This allows you to have many Vagrant environments configured simultaneously.

```bash
vagrant-percona$ ./create-new-env.sh single_node ~/vagrant/testing-issue-428
Creating 'single_node' Environment

vagrant-percona$ cd ~/vagrant/testing-issue-428
~/vagrant/testing-issue-428$ vagrant up --provider=aws
~/vagrant/testing-issue-428$ vagrant ssh
```

## Master/Slave

This Vagrantfile will launch 2 (or more: edit the file and uncomment the proper build line) MySQL servers in either VirtualBox or AWS. Running the ms-setup.pl script will set the first instance to be the master and all remaining nodes to be async slaves.

```bash
ln -sf Vagrantfile.ms.rb Vagrantfile
vagrant up --provider [aws|virtualbox]
./ms-setup.pl
```

## PXC

This Vagrantfile will launch 3 Percona XtraDB Cluster 5.7 nodes in either VirtualBox or AWS. The InnoDB buffer pool is set to 128MB. The first node is automatically bootstrapped to form the cluster; the remaining 2 nodes then join it.

Each VirtualBox instance is launched with 256MB of memory.

Each EC2 instance will use the `m3.medium` instance type, which has 3.75GB of RAM.

```bash
ln -sf Vagrantfile.pxc.rb Vagrantfile
vagrant up
./pxc-bootstrap.sh
```

__NOTE:__ Because Vagrant can build AWS instances in parallel, there is no guarantee "node 1" will bootstrap before the other two. If that happens, nodes 2 and 3 will be unable to join the cluster. It is therefore recommended that you launch node 1 manually first, then launch the remaining nodes. _(This is not an issue with VirtualBox, as parallel builds are not supported there.)_

Example:

```bash
vagrant up node1 && sleep 5 && vagrant up
```

## PXC (Big)

This Vagrantfile will launch 3 Percona XtraDB Cluster 5.7 nodes in either VirtualBox or AWS. The InnoDB buffer pool is set to _12GB_.

__WARNING:__ This requires a virtual machine with 15GB of RAM. Most consumer laptops and desktops do not have enough RAM to run multiple nodes of this configuration.

Each EC2 instance will use the `m3.xlarge` instance type, which has 15GB of RAM.

```bash
ln -sf Vagrantfile.pxc-big.rb Vagrantfile
vagrant up
```

__NOTE:__ Because Vagrant can build AWS instances in parallel, there is no guarantee "node 1" will bootstrap before the other two. If that happens, nodes 2 and 3 will be unable to join the cluster. It is therefore recommended that you launch node 1 manually first, then launch the remaining nodes. _(This is not an issue with VirtualBox, as parallel builds are not supported there.)_

Example:

```bash
vagrant up node1 && sleep 5 && vagrant up
```

## Using this repo to create benchmarks

@@ -210,7 +241,10 @@
vagrant up
...
```

## Cleanup

### Shutdown the vagrant instance(s)

```
vagrant destroy
```

# Future Stuff

* Virtualbox support
59 changes: 36 additions & 23 deletions Vagrantfile.ms.rb
@@ -1,14 +1,19 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

# To create multiple slaves, read the instructions near the end
# of this file.

require File.dirname(__FILE__) + '/lib/vagrant-common.rb'

def build_box( config, name, ip, server_id )

  mysql_version = "57"

  config.vm.define name do |node_config|
    node_config.vm.hostname = name
    node_config.vm.network :private_network, ip: ip, adaptor: 1, auto_config: false
    node_config.vm.provision :hostmanager

    # Provisioners
    provision_puppet( node_config, "base.pp" )
    provision_puppet( node_config, "percona_server.pp" ) { |puppet|
@@ -17,7 +22,7 @@ def build_box( config, name, ip, server_id )
        "percona_server_version" => mysql_version,
        "innodb_buffer_pool_size" => "128M",
        "innodb_log_file_size" => "64M",
        "server_id" => server_id
      }
    }
    provision_puppet( node_config, "percona_client.pp" ) { |puppet|
@@ -28,26 +33,30 @@ def build_box( config, name, ip, server_id )

    # Providers
    provider_virtualbox( nil, node_config, 256 ) { |vb, override|
      vb.linked_clone = true
      provision_puppet( override, "percona_server.pp" ) {|puppet|
        puppet.facter = {
          "default_interface" => "eth1",
          "datadir_dev" => "dm-2"
        }
      }
    }

    provider_aws( name, node_config, 'm3.medium') { |aws, override|
      aws.block_device_mapping = [
        {
          'DeviceName' => "/dev/sdl",
          'VirtualName' => "mysql_data",
          'Ebs.VolumeSize' => 20,
          'Ebs.DeleteOnTermination' => true,
        }
      ]
      provision_puppet( override, "percona_server.pp" ) {|puppet|
        puppet.facter = {"datadir_dev" => "xvdl"}
      }
    }
  end

  if block_given?
    yield
end
Expand All @@ -56,10 +65,14 @@ def build_box( config, name, ip, server_id )


Vagrant.configure("2") do |config|
  config.vm.box = "grypyrg/centos-x86_64"
  config.ssh.username = "vagrant"

  build_box( config, 'master', '192.168.70.2', '1' )
  build_box( config, 'slave1', '192.168.70.3', '2' )

  # Uncomment the line below to build a 3rd slave. You can add more
  # lines like this one to have more nodes. Be sure to adjust the
  # parameters to prevent duplicates.
  #build_box( config, 'slave2', '192.168.70.4', '3' )
end
