initial commit
Russell Endicott committed Nov 23, 2020
commit 6e5b119 (0 parents)
Showing 6 changed files with 350 additions and 0 deletions.
7 changes: 7 additions & 0 deletions .gitignore
@@ -0,0 +1,7 @@
*.csv
*.txt
notes
*.log.json
output*/
panner
go.sum
43 changes: 43 additions & 0 deletions Makefile
@@ -0,0 +1,43 @@
version := 0.3
packageNameNix := panner-linux-amd64-$(version).tar.gz
packageNameMac := panner-darwin-amd64-$(version).tar.gz
packageNameWindows := panner-windows-amd64-$(version).tar.gz

build_dir := output
build_dir_linux := output-linux
build_dir_mac := output-mac
build_dir_windows := output-windows

build: format configure build-linux build-mac build-windows

build_m: format configure build-mac
	cp ./$(build_dir_mac)/panner ./

format:
	go fmt ./...


configure:
	mkdir -p $(build_dir)
	mkdir -p $(build_dir_linux)
	mkdir -p $(build_dir_mac)
	mkdir -p $(build_dir_windows)


build-linux:
	env GOOS=linux GOARCH=amd64 go build -o ./$(build_dir_linux)/panner -ldflags "-X main.version=$(version)"
	@cd ./$(build_dir_linux) && tar zcf ../$(build_dir)/$(packageNameNix) .

build-mac:
	env GOOS=darwin GOARCH=amd64 go build -o ./$(build_dir_mac)/panner -ldflags "-X main.version=$(version)"
	@cd ./$(build_dir_mac) && tar zcf ../$(build_dir)/$(packageNameMac) .

build-windows:
	env GOOS=windows GOARCH=amd64 go build -o ./$(build_dir_windows)/panner.exe -ldflags "-X main.version=$(version)"
	@cd ./$(build_dir_windows) && tar zcf ../$(build_dir)/$(packageNameWindows) .

clean:
	rm -rf $(build_dir)
	rm -rf $(build_dir_linux)
	rm -rf $(build_dir_mac)
	rm -rf $(build_dir_windows)
75 changes: 75 additions & 0 deletions README.md
@@ -0,0 +1,75 @@
# panner
Pan for EBS snapshot gold and maybe you'll get rich.

Analyzes the EBS snapshots in an account and determines how much money you could save by deleting snapshots that are missing their volumes.

* Supports date filtering of snapshots
* Determines whether snapshots still have their source volumes and writes the full results to a file
* Outputs snapshots aggregated by common volume ID into a separate file so minimum estimated cost savings can be determined based on volume size

This executable is powered by [dustcollector](https://github.com/GESkunkworks/dustcollector).

# Overview
EBS snapshots are billed at a GB-month rate, so if you have a 500GB volume with a single snapshot and the rate is $0.05 per GB-month, that snapshot costs $25/month to store. Snapshots after the initial one store only what has changed since the previous snapshot, so if you snapshot the volume again after changing 50GB worth of data, you are billed for 550 GB-month and your bill increases to $27.50 the next month.
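To make that arithmetic concrete, here is a minimal sketch of the same computation in Go (the sizes and rate are the illustrative numbers above, not live AWS pricing):

```
package main

import "fmt"

// snapshotCostPerMonth returns the monthly storage cost for a snapshot chain:
// the initial full snapshot plus each incremental snapshot's changed data.
func snapshotCostPerMonth(initialGB float64, incrementsGB []float64, ratePerGBMonth float64) float64 {
	totalGB := initialGB
	for _, inc := range incrementsGB {
		totalGB += inc
	}
	return totalGB * ratePerGBMonth
}

func main() {
	// 500GB initial snapshot: 500 * $0.05 = $25.00/month
	fmt.Printf("$%.2f\n", snapshotCostPerMonth(500, nil, 0.05))
	// add one incremental with 50GB of changed blocks: 550 * $0.05 = $27.50/month
	fmt.Printf("$%.2f\n", snapshotCostPerMonth(500, []float64{50}, 0.05))
}
```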

If an EBS volume is deleted, its snapshots remain in case you want to restore the volume at some point in the future. However, most of the time people simply forget to delete snapshots when they terminate infrastructure, or they intend to keep a snapshot for only a few months and then forget about it. This can leave snapshots lying around for years, and the costs add up.

This tool is designed to help you find those orphaned snapshots so you can clean them up and save some money.

## Installation
Download a release for your OS/architecture from the releases tab, unzip it to a folder, then execute the binary from the command line.

Alternatively, if you have a Golang dev environment, you can build locally with `make build`.

## Usage
The following command will analyze the EBS snapshots in the `digital-public-cloudops` account and output the snapshot info and cost results to two files, `digital-public-cloudops-snapshots.csv` and `digital-public-cloudops-bars.csv`, respectively. It will only look at snapshots that were created before `2019-01-01`, and it will paginate snapshot results up to 20 pages with a page size of 750 (the max is 1000).

```
$ ACCOUNT=digital-public-cloudops bash -c './panner -max-pages 20 -pagesize 750 -datefilter 2019-01-01 -profile $ACCOUNT -outfile-snapshots $ACCOUNT-snapshots.csv -outfile-bars $ACCOUNT-bars.csv -outfile-summary $ACCOUNT-summary.txt'
```

sample output:
```
t=2020-08-13T01:34:10-0400 lvl=info msg="Starting panner"
t=2020-08-13T01:34:10-0400 lvl=info msg="starting session" profile=digital-public-cloudops
t=2020-08-13T01:34:11-0400 lvl=info msg="Filtered snapshots page by date" pre-filter=13 post-filter=10 pageNum=1
t=2020-08-13T01:34:11-0400 lvl=info msg="searching for batch of volumes" size=8
t=2020-08-13T01:34:11-0400 lvl=info msg="Waiting for describeVolume batches to finish"
t=2020-08-13T01:34:14-0400 lvl=info msg="Total snapshots post date filter" snapshots_in_scope=10
t=2020-08-13T01:34:14-0400 lvl=info msg="Total snapshots analyzed" total-analyzed=13
t=2020-08-13T01:34:14-0400 lvl=info msg="grabbing all latest launch template versions"
t=2020-08-13T01:34:15-0400 lvl=info msg="Writing snapshots to file"
t=2020-08-13T01:34:15-0400 lvl=info msg="wrote nuggets to file" filename=digital-public-cloudops-snapshots.csv
t=2020-08-13T01:34:15-0400 lvl=info msg="Writing cost info to file"
t=2020-08-13T01:34:15-0400 lvl=info msg="wrote bars to file" filename=digital-public-cloudops-bars.csv
t=2020-08-13T01:34:15-0400 lvl=info msg="wrote summary to file" filename=digital-public-cloudops-summary.txt
```

From there you can look at the summary file and delete the snapshots if you want to realize the savings.

Summary
```
After analyzing the account we can see that there are 5 snapshots that can be deleted because they were created before 2019-01-01 and are not used in any AutoScaling group or AMI sharing capacity. However, before these snapshots can be deleted several other resources need to be deleted first. Below you can find the ordered deletion plan:
Some of the snapshots we need to delete are currently registered as AMIs or used in Launch Templates/Configs. However we've detected that those AMI's and Launch Templates/Configs are not used in any autoscaling group. This doesn't mean they're not being used by someone (e.g., referenced in a cloudformation template). You should be safe to delete them but you should always check to be sure
If you feel comfortable then here's the plan:
Delete the following LaunchTemplates first:
test-lt
then delete the following LaunchConfigurations:
test-snap-lc
then delete the following AMIs:
ami-a7ce9bdd
ami-6cee4b16
then finally delete the following Snapshots:
snap-092ab265885243a2d
snap-005ccdfd0fedb77b6
snap-06e70bf98b9e43b2f
snap-0a4795e305f1bc40d
snap-07a4f8539c10e0dc7
3 snapshots were spared because their EBS volume still exists
1 snapshots were spared because they were associated with an autoscaling group, were shared directly to another account, or were registered as an AMI that was shared to another account.
Total size of eligible for deletion is 40 GB. At a per GB-month rate of $0.050000 there is a potential savings of $2.000000
```
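If you decide to act on a plan like that, the ordered deletions can be scripted with aws-sdk-go, which panner already depends on. Below is a minimal, hypothetical sketch; the resource IDs are the illustrative ones from the sample summary above, nothing here is part of panner itself, and you should verify each resource before deleting it:

```
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSessionWithOptions(session.Options{
		Config:  aws.Config{Region: aws.String("us-east-1")},
		Profile: "digital-public-cloudops",
	}))
	ec2Svc := ec2.New(sess)
	asgSvc := autoscaling.New(sess)

	// 1. Delete the launch template.
	if _, err := ec2Svc.DeleteLaunchTemplate(&ec2.DeleteLaunchTemplateInput{
		LaunchTemplateName: aws.String("test-lt"),
	}); err != nil {
		log.Fatal(err)
	}
	// 2. Delete the launch configuration.
	if _, err := asgSvc.DeleteLaunchConfiguration(&autoscaling.DeleteLaunchConfigurationInput{
		LaunchConfigurationName: aws.String("test-snap-lc"),
	}); err != nil {
		log.Fatal(err)
	}
	// 3. Deregister the AMIs that reference the snapshots.
	for _, id := range []string{"ami-a7ce9bdd", "ami-6cee4b16"} {
		if _, err := ec2Svc.DeregisterImage(&ec2.DeregisterImageInput{
			ImageId: aws.String(id),
		}); err != nil {
			log.Fatal(err)
		}
	}
	// 4. Finally, delete the snapshots themselves.
	for _, id := range []string{
		"snap-092ab265885243a2d", "snap-005ccdfd0fedb77b6", "snap-06e70bf98b9e43b2f",
		"snap-0a4795e305f1bc40d", "snap-07a4f8539c10e0dc7",
	} {
		if _, err := ec2Svc.DeleteSnapshot(&ec2.DeleteSnapshotInput{
			SnapshotId: aws.String(id),
		}); err != nil {
			log.Fatal(err)
		}
	}
}
```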
68 changes: 68 additions & 0 deletions common.go
@@ -0,0 +1,68 @@
package main

// containsStringPointer reports whether any string pointed to in strSlice
// equals the string pointed to by searchStr.
func containsStringPointer(strSlice []*string, searchStr *string) bool {
	for _, value := range strSlice {
		if *value == *searchStr {
			return true
		}
	}
	return false
}

// containsString reports whether strSlice contains searchStr.
func containsString(strSlice []string, searchStr string) bool {
	for _, value := range strSlice {
		if value == searchStr {
			return true
		}
	}
	return false
}

// dedupeStringPointer returns strSlice with duplicate string values removed.
func dedupeStringPointer(strSlice []*string) []*string {
	var returnSlice []*string
	for _, value := range strSlice {
		if !containsStringPointer(returnSlice, value) {
			returnSlice = append(returnSlice, value)
		}
	}
	return returnSlice
}

// dedupeString returns strSlice with duplicate values removed.
func dedupeString(strSlice []string) []string {
	var returnSlice []string
	for _, value := range strSlice {
		if !containsString(returnSlice, value) {
			returnSlice = append(returnSlice, value)
		}
	}
	return returnSlice
}

// makeBatchesStringPointer takes a slice of string pointers and returns them as
// a slice of string pointer slices in batches of batchSize. Useful for splitting
// up work into batches for parallel operations.
func makeBatchesStringPointer(strSlice []*string, batchSize int) (batches [][]*string) {
	numBatches, remainder := len(strSlice)/batchSize, len(strSlice)%batchSize
	// build full batches
	for i := 1; i <= numBatches; i++ {
		startIndex := batchSize * (i - 1)
		endIndex := i * batchSize
		batches = append(batches, strSlice[startIndex:endIndex])
	}
	if remainder > 0 {
		// build the last partial batch from whatever remains
		batches = append(batches, strSlice[len(strSlice)-remainder:])
	}
	return batches
}
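As a quick illustration of the batching helper above, here is a hypothetical standalone snippet (imagine it compiled in the same package alongside common.go; the volume IDs are made up):

```
package main

import "fmt"

func main() {
	vals := []string{"vol-1", "vol-2", "vol-3", "vol-4", "vol-5"}
	ids := make([]*string, len(vals))
	for i := range vals {
		ids[i] = &vals[i]
	}
	// batchSize 2 yields three batches: 2 + 2 + 1 elements
	for i, batch := range makeBatchesStringPointer(ids, 2) {
		fmt.Printf("batch %d contains %d volume IDs\n", i, len(batch))
	}
}
```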
10 changes: 10 additions & 0 deletions go.mod
@@ -0,0 +1,10 @@
module panner

go 1.13

require (
	github.com/GESkunkworks/dustcollector v0.0.1
	github.com/aws/aws-sdk-go v1.34.3
	github.com/go-stack/stack v1.8.0 // indirect
	github.com/inconshreveable/log15 v0.0.0-20200109203555-b30bc20e4fd1
)
147 changes: 147 additions & 0 deletions panner.go
@@ -0,0 +1,147 @@
package main

import (
	"flag"
	"fmt"
	"os"

	"github.com/GESkunkworks/dustcollector"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/inconshreveable/log15"
)

// version is the global version string, set at build time via
// -ldflags "-X main.version=..."
var version string

// loggo is the global logger
var loggo log15.Logger

// setLogger sets up logging globally for the packages involved
func setLogger(noLogFile bool, logFileS, loglevel string) {
	loggo = log15.New()
	if noLogFile && loglevel == "debug" {
		// debug to stdout only
		loggo.SetHandler(
			log15.LvlFilterHandler(
				log15.LvlDebug,
				log15.StreamHandler(os.Stdout, log15.LogfmtFormat())))
	} else if noLogFile {
		// info to stdout only
		loggo.SetHandler(
			log15.LvlFilterHandler(
				log15.LvlInfo,
				log15.StreamHandler(os.Stdout, log15.LogfmtFormat())))
	} else if loglevel == "debug" {
		// log to stdout and file
		loggo.SetHandler(log15.MultiHandler(
			log15.StreamHandler(os.Stdout, log15.LogfmtFormat()),
			log15.LvlFilterHandler(
				log15.LvlDebug,
				log15.Must.FileHandler(logFileS, log15.JsonFormat()))))
	} else {
		// log to stdout and file
		loggo.SetHandler(log15.MultiHandler(
			log15.LvlFilterHandler(
				log15.LvlInfo,
				log15.StreamHandler(os.Stdout, log15.LogfmtFormat())),
			log15.LvlFilterHandler(
				log15.LvlInfo,
				log15.Must.FileHandler(logFileS, log15.JsonFormat()))))
	}
}

// errorhandle is a generic error handler for easy typing: it logs the error
// and exits if err is non-nil
func errorhandle(err error) {
	if err != nil {
		loggo.Error(err.Error())
		os.Exit(1)
	}
}

func main() {
	var profile string
	var region string
	var dateFilter string
	var logFile string
	var logLevel string
	var outFileSnap string
	var outFileSummary string
	var outFileBars string
	var noLogFile, versionFlag, showSummary bool
	var pageSize, maxPages, volBatchSize int
	var ebsSnapRate float64
	flag.StringVar(&profile, "profile", "default", "AWS session credentials profile")
	flag.StringVar(&region, "region", "us-east-1", "AWS Region")
	flag.StringVar(&logFile, "logfile", "panner.log.json", "JSON logfile location")
	flag.StringVar(&logLevel, "loglevel", "info", "Log level (info or debug)")
	flag.StringVar(&dateFilter, "datefilter", "2018-01-01",
		"only analyze snapshots created before this date")
	flag.StringVar(&outFileSnap, "outfile-snapshots", "out-snap.csv",
		"filename of csv output file that contains all snapshots that meet dateFilter criteria")
	flag.StringVar(&outFileBars, "outfile-bars", "out-bars.csv",
		"filename of csv output file that contains snapshots "+
			"aggregated by common volume (useful in determining "+
			"potential cost savings)")
	flag.StringVar(&outFileSummary, "outfile-summary", "out-summary.txt",
		"filename of text output file that shows summary of action plan for this account")
	flag.BoolVar(&showSummary, "show-summary", false,
		"when set, summary is output to stdout as well as being written to --outfile-summary")
	flag.IntVar(&maxPages, "max-pages", 5,
		"maximum number of pages to pull during describe snapshots call")
	flag.IntVar(&volBatchSize, "describe-volumes-batch-size", 20,
		"make this larger if you hit throttling limits. "+
			"Make this smaller if you want to speed up the program.")
	flag.IntVar(&pageSize, "pagesize", 500, "number of snapshots to pull per page")
	flag.BoolVar(&versionFlag, "v", false, "print version and exit")
	flag.Float64Var(&ebsSnapRate, "ebs-snap-rate", 0.05,
		"per GB-month cost for EBS snapshot (used in analysis summary at end of script)")
	flag.BoolVar(&noLogFile, "nologfile", false,
		"indicates whether or not to skip writing of a filesystem log file")
	flag.Parse()
	if versionFlag {
		fmt.Printf("panner %s\n", version)
		os.Exit(0)
	}
	setLogger(noLogFile, logFile, logLevel)
	loggo.Info("Starting panner")
	loggo.Info("starting session", "profile", profile)
	sess := session.Must(session.NewSessionWithOptions(session.Options{
		Config:  aws.Config{Region: aws.String(region)},
		Profile: profile,
	}))
	einput := dustcollector.ExpeditionInput{
		Session:                sess,
		MaxPages:               &maxPages,
		PageSize:               &pageSize,
		VolumeBatchSize:        &volBatchSize,
		DateFilter:             &dateFilter,
		Logger:                 &loggo,
		OutfileRecommendations: &outFileSummary,
		OutfileNuggets:         &outFileSnap,
		OutfileBars:            &outFileBars,
		EbsSnapRate:            &ebsSnapRate,
	}
	exp, err := dustcollector.New(&einput)
	errorhandle(err)
	err = exp.Start()
	if err != nil {
		loggo.Error("error running expedition", "error", err.Error())
		os.Exit(1)
	}
	loggo.Info("Writing snapshots to file")
	err = exp.ExportNuggets()
	errorhandle(err)
	loggo.Info("Writing cost info to file")
	err = exp.ExportBars()
	errorhandle(err)
	// now show/export the action plan
	if showSummary {
		for _, line := range exp.GetRecommendations() {
			fmt.Println(line)
		}
	}
	err = exp.ExportRecommendations()
	errorhandle(err)
}
