diff --git a/CHANGES.md b/CHANGES.md
index 4508fb38..8c1678a6 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -1,5 +1,10 @@
 # jHDF Change Log
 
+## v0.6.7
+- Add support for Bitshuffle filter https://github.com/jamesmudd/jhdf/issues/366
+- Add ability to get dataset filter information via `Dataset#getFilters()` https://github.com/jamesmudd/jhdf/issues/378
+- Dependency and CI updates
+
 ## v0.6.6
 - Add support for slicing of contiguous datasets. This adds a new method `Dataset#getData(long[] sliceOffset, int[] sliceDimensions)`, allowing you to read sections of a dataset that would otherwise be too large to fit in memory. Note: slicing of chunked datasets is still unsupported. https://github.com/jamesmudd/jhdf/issues/52 https://github.com/jamesmudd/jhdf/pull/361
 - Fix OSGi `Export-Package` header resulting in API access restriction when running in OSGi. https://github.com/jamesmudd/jhdf/issues/365 https://github.com/jamesmudd/jhdf/pull/367
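The new `Dataset#getFilters()` call from v0.6.7 could be exercised roughly as below. This is a sketch only: the file path and the dataset path `/data` are placeholders, and printing the returned value directly assumes its `toString()` is informative — neither is taken from the jHDF API docs.

```java
// Hypothetical sketch of inspecting a dataset's filter pipeline with jHDF v0.6.7.
// "/path/to/file.hdf5" and "/data" are placeholders for a real file and dataset.
import io.jhdf.HdfFile;
import io.jhdf.api.Dataset;
import java.nio.file.Paths;

public class PrintFilters {
	public static void main(String[] args) {
		try (HdfFile hdfFile = new HdfFile(Paths.get("/path/to/file.hdf5"))) {
			Dataset dataset = hdfFile.getDatasetByPath("/data");
			// New in v0.6.7: report the filters (e.g. deflate, shuffle,
			// or the newly supported Bitshuffle) applied to this dataset.
			System.out.println(dataset.getFilters());
		}
	}
}
```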
diff --git a/README.md b/README.md
index 3f5493c5..eb38e786 100644
--- a/README.md
+++ b/README.md
@@ -17,15 +17,13 @@ try (HdfFile hdfFile = new HdfFile(Paths.get("/path/to/file.hdf5")) {
 
 For an example of traversing the tree inside a HDF5 file see [PrintTree.java](jhdf/src/main/java/io/jhdf/examples/PrintTree.java). For accessing attributes see [ReadAttribute.java](jhdf/src/main/java/io/jhdf/examples/ReadAttribute.java).
 
-## Why did I start jHDF?
-Mostly it's a challenge, HDF5 is a fairly complex file format with lots of flexibility, writing a library to access it is interesting. Also, as a widely used file format for storing scientific, engineering, and commercial data, it would seem like a good idea to be able to read HDF5 files with more than one library. In particular JVM languages are among the most widely used so having a native HDF5 implementation seems useful.
-
 ## Why should I use jHDF?
 - Easy integration with JVM based projects. The library is available on [Maven Central](https://search.maven.org/search?q=g:%22io.jhdf%22%20AND%20a:%22jhdf%22), and [GitHub Packages](https://github.com/jamesmudd/jhdf/packages/), so using it should be as easy as adding any other dependency. To use the libraries supplied by the HDF Group you need to load native code, which means you need to handle this in your build, and it complicates distribution of your software on multiple platforms.
 - The API is designed to be familiar to Java programmers, so hopefully it works as you might expect. (If this is not the case, please open an issue with suggestions for improvement.)
 - No use of JNI, so you avoid all the issues associated with calling native code from the JVM.
 - Fully debuggable: you can step through the library with a Java debugger.
 - Provides access to datasets' `ByteBuffer`s to allow custom reading logic or integration with other libraries.
+- Integration with Java logging via SLF4J
 - Performance? Maybe: the library uses Java NIO `MappedByteBuffer`s, which should provide fast file access. In addition, when accessing chunked datasets the library is parallelized to take advantage of modern CPUs. `jHDF` also allows parallel reading of multiple datasets or multiple files. I have seen cases where `jHDF` is significantly faster than the C libraries, but as with all performance questions it is case specific, so you will need to run your own tests on the cases you care about. If you do run tests, please post the results so everyone can benefit; here are some results I am aware of:
     - [Peter Kirkham - Parallel IO Improvements](http://pkirkham.github.io/pyrus/parallel-io-improvements/)
 
@@ -35,6 +33,9 @@ Mostly it's a challenge, HDF5 is a fairly complex file format with lots of flexi
 - If you want to read slices of chunked datasets (slicing of contiguous datasets has been supported since v0.6.6). This is an excellent feature of HDF5 and one reason it is well suited to large datasets. Support will be added in the future, but it is currently not possible. If you are interested in this, please comment on or react to [issue #52](https://github.com/jamesmudd/jhdf/issues/52)
 - If you want to read datasets larger than can fit in a Java array (i.e. `Integer.MAX_VALUE` elements). This issue would also be addressed by slicing.
 
+## Why did I start jHDF?
+Mostly as a challenge: HDF5 is a fairly complex file format with lots of flexibility, and writing a library to access it is interesting. Also, as a widely used file format for storing scientific, engineering, and commercial data, it seems like a good idea to be able to read HDF5 files with more than one library. In particular, JVM languages are among the most widely used, so having a native JVM HDF5 implementation seems useful.
+
 ## Developing jHDF
 - Fork this repository and clone your fork
 - Inside the `jhdf` directory run `./gradlew build` (`./gradlew.bat build` on Windows); this will run the build and tests, fetching dependencies.
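The contiguous-dataset slicing mentioned above (added in v0.6.6) could be used roughly as follows. This is a hedged sketch: the file path, dataset path, and slice coordinates are placeholders; only the `getData(long[] sliceOffset, int[] sliceDimensions)` signature comes from the changelog.

```java
// Sketch of slicing a contiguous dataset, per the v0.6.6 changelog entry.
// "/path/to/file.hdf5" and "/contiguousData" are placeholders.
import io.jhdf.HdfFile;
import io.jhdf.api.Dataset;
import java.nio.file.Paths;

public class SliceExample {
	public static void main(String[] args) {
		try (HdfFile hdfFile = new HdfFile(Paths.get("/path/to/file.hdf5"))) {
			Dataset dataset = hdfFile.getDatasetByPath("/contiguousData");
			// Read a 10x10 slice starting at element (5, 5), instead of
			// loading the whole (possibly very large) dataset into memory.
			Object slice = dataset.getData(new long[]{5, 5}, new int[]{10, 10});
			// The result is a Java array shaped like the requested slice.
			System.out.println(slice.getClass());
		}
	}
}
```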
diff --git a/jhdf/build.gradle b/jhdf/build.gradle
index a382fdd6..2cfaae04 100644
--- a/jhdf/build.gradle
+++ b/jhdf/build.gradle
@@ -26,7 +26,7 @@ plugins {
 
 // Variables
 group = 'io.jhdf'
-version = '0.6.6'
+version = '0.6.7'
 
 compileJava {
     sourceCompatibility = "1.8"
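For consumers, picking up the version bumped in this diff from Maven Central might look like the following Gradle fragment (a sketch; the coordinates combine the `group` and `version` values shown in build.gradle above):

```groovy
// Sketch: depending on the released jHDF artifact from Maven Central.
// Coordinates derived from group 'io.jhdf' and version '0.6.7' in this diff.
dependencies {
    implementation 'io.jhdf:jhdf:0.6.7'
}
```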