[![Build status on GitLab CI][gitlab-ci-master-badge]][gitlab-ci-link] [![Latest release][release-badge]][release-link] [![Docker image][docker-image-badge]][docker-image-link] ![Project license][repo-license-badge]
A fork of the send.vis.ee fork of Mozilla's Firefox Send. Mozilla discontinued Send; this fork is intended to be used as an internal service for user uploads.
Major changes to the original work:
- FxA authorization is active by default. To deactivate it, set the env variable `FXA_REQUIRED` to `false` (note that this case has not been properly tested, as we are not going to provide access to anybody except our users).
- The `/api/download/token` and `/api/metadata/` endpoints are protected with FxA authorization.
- The `DEFAULT_EXPIRE_SECONDS` env variable is set to `0` by default (so the service keeps the uploaded files forever by default).
- Partial/resumable uploads and downloads are implemented.
- Forked at Mozilla's last publicly hosted version
- Mozilla & Firefox branding is removed so we can legally self-host
The original project by Mozilla can be found here.
The `mozilla-master` branch holds the `master` branch as left by Mozilla.
The `send-v3` branch holds the commit tree of Mozilla's last publicly hosted version, which this fork is based on.
The `send-v4` branch holds the commit tree of Mozilla's last experimental version, which was still a work in progress (featuring file reporting, download tokens, trust warnings and FxA changes); this has been selectively merged into this fork.
Please consider donating to allow me to keep working on this.
Thanks Mozilla for building this amazing tool!
Docs: FAQ, Encryption, Build, Docker, More
- What it does
- Requirements
- Development
- Commands
- Configuration
- Localization
- Contributing
- Instances
- Deployment
- Clients
- License
- Partial upload
A file sharing experiment which allows you to send encrypted files to other users.
- Node.js 18.x
- Redis server (optional for development)
- AWS S3 or compatible service (optional)
To start an ephemeral development server, run:
```sh
npm install
npm start
```
Then, browse to http://localhost:8080
| Command          | Description                                           |
|------------------|-------------------------------------------------------|
| `npm run format` | Formats the frontend and server code using prettier.  |
| `npm run lint`   | Lints the CSS and JavaScript code.                    |
| `npm test`       | Runs the suite of mocha tests.                        |
| `npm start`      | Runs the server in development configuration.         |
| `npm run build`  | Builds the production assets.                         |
| `npm run prod`   | Runs the server in production configuration.          |
| `npm run docker` | Builds the Docker image.                              |
The server is configured with environment variables. See `server/config.js` for all options and `docs/docker.md` for examples.
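As a rough sketch (the values below are illustrative, not recommended defaults; `server/config.js` remains the authoritative reference), a configuration via environment variables might look like this:

```sh
# Illustrative example only; see server/config.js for the full list of options.
export BASE_URL='https://send.example.com'  # hypothetical public URL of this instance
export REDIS_HOST='localhost'               # Redis stores file metadata
export FXA_REQUIRED=true                    # FxA authorization is active by default in this fork
export DEFAULT_EXPIRE_SECONDS=0             # 0 = keep uploaded files forever (this fork's default)

npm run prod
```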
See: docs/localization.md
Pull requests are always welcome! Feel free to check out the list of "good first issues" (to be implemented).
Find a list of public instances here: https://github.com/timvisee/send-instances/
See: docs/deployment.md
Docker quickstart: docs/docker.md
AWS example using Ubuntu Server 20.04: docs/AWS.md
- Web: this repository
- Command-line: `ffsend`
- Android: see Android section
- Thunderbird: FileLink provider for Send
The Android implementation is contained in the `android` directory, and can be viewed locally for easy testing and editing by running `ANDROID=1 npm start` and then visiting http://localhost:8080. CSS and image files are located in the `android/app/src/main/assets` directory.
Mozilla Public License Version 2.0
qrcode.js licensed under MIT
Please note that the plain HTTP API does not mandate encryption; the client is responsible for it. As such, completely unprotected data can also be uploaded, but it will not be shareable via the web interface.
The server-side implementation (Tus protocol) stores incomplete files in a local directory, by default under the temporary directory; this can be explicitly configured by specifying an absolute local directory path (best set via the environment variable `RESUMABLE_FILE_DIR`).
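For example (the directory below is purely illustrative; any absolute path writable by the server process works):

```sh
# Hypothetical location for partially uploaded files.
export RESUMABLE_FILE_DIR=/var/lib/send/resumable
```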
The API to use this is protected with FxA OAuth authentication to avoid exploitation of this functionality by anonymous clients.
The API flow can be summarized as follows:
- initiate a resumable upload: `POST /api/upload`
- upload chunks of data: `PATCH /api/upload/<id>`
- complete the file upload: `POST /api/upload/<id>/done`
While you can also simply upload a complete file in one request to the `/api/upload` endpoint, the following criteria must be met to detect a resumable upload:

- the HTTP request header `Upload-Length` must be specified (total file size in bytes)
- the HTTP request must not have a body payload (the HTTP header `Content-Length` should be absent as well)
The responses for resumable uploads will have no payload; all required information will be delivered to the client via the response status code and headers.
Let's upload our `package-lock.json` in multiple chunks of 512 KiB each using `curl` on the command line.
To split the file into the desired chunks, we can use a command like this:

```sh
split -b 512K --additional-suffix=.chunk package-lock.json data.
```
...which will give us the following files (overall size and number of chunks may differ for you):

```
$ ls -1sk data.*.chunk
512 data.aa.chunk
512 data.ab.chunk
508 data.ac.chunk
```
If your system does not have the `split` command, a similar command should be available, or you can install the GNU core utilities.
Initiate resumable upload:

```sh
curl -v 'http://localhost:1234/api/upload' \
  --header "Authorization: Bearer ${OAUTH_TOKEN}" \
  -X POST \
  --header 'Upload-Length: 1564869'
```

...which will respond with HTTP status `201` (Created) and a bunch of useful headers, most notably `Location`, which contains the actual URL to continue uploading the chunks to, e.g.:

```
Location: http://localhost:1234/api/upload/d77b41ebfc81e100c2c8f33cd68702b8
```
Upload data chunks:
Now that we have our specific endpoint, we can upload the chunks using commands like the following:
```sh
curl -v 'http://localhost:1234/api/upload/d77b41ebfc81e100c2c8f33cd68702b8' \
  --header "Authorization: Bearer ${OAUTH_TOKEN}" \
  -X PATCH \
  --header 'Content-Type: application/offset+octet-stream' \
  --header 'Upload-Offset: 0' \
  --data-binary @data.aa.chunk
```
Please note that we need to specify the byte offset of the chunk we are uploading, since the data chunk itself carries no information about where it is located within the complete file.
Also note that if you try to re-upload an already-completed data chunk or upload a chunk out of order (e.g. leaving a chunk out), the service will respond with an HTTP `409` (Conflict). This also means that uploading data chunks concurrently is not allowed.
A successful chunk upload will respond with an HTTP `204` (No Content) and return the next offset, re-using the `Upload-Offset` header in the response, which would be `524288` for our first chunk, but `1048576` for the second.
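For instance, continuing the example above (a sketch assuming the same upload URL and token), the second chunk would be sent with the offset returned for the first one:

```sh
# Second chunk: Upload-Offset must match the value returned for the previous chunk (524288).
curl -v 'http://localhost:1234/api/upload/d77b41ebfc81e100c2c8f33cd68702b8' \
  --header "Authorization: Bearer ${OAUTH_TOKEN}" \
  -X PATCH \
  --header 'Content-Type: application/offset+octet-stream' \
  --header 'Upload-Offset: 524288' \
  --data-binary @data.ab.chunk
```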
When the last chunk has been uploaded, the value of the `Upload-Offset` response header will be equal to the file size in bytes, signaling that the upload data is complete.
Finalize upload:
The last remaining step is to finalize the upload, for which we can also specify additional metadata as for single-request uploads (just append `/done` to the resumable chunk upload URL):
```sh
curl -v 'http://localhost:1234/api/upload/d77b41ebfc81e100c2c8f33cd68702b8/done' \
  --header "Authorization: Bearer ${OAUTH_TOKEN}" \
  -X POST \
  --header 'Content-Type: application/octet-stream' \
  --header 'X-File-Metadata: eyJmaWxlbmFtZSI6InBhY2thZ2UtbG9jay5qc29uIn0K'
```
...which will respond with an HTTP `200` (OK) and a JSON response as follows:

```json
{
  "url": "http://localhost:1234/api/download/eae1d3af22abfa0b",
  "owner": "e114ec6ec5bf228bf046",
  "id": "eae1d3af22abfa0b"
}
```
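Putting the steps together, here is a rough end-to-end sketch of the same flow; the file name, server URL, chunk size and the header parsing with `awk` are illustrative assumptions, not part of the API, and error handling is omitted for brevity:

```sh
#!/bin/sh
# Sketch of a resumable upload against a local server on port 1234,
# assuming a valid FxA token in $OAUTH_TOKEN.
FILE=package-lock.json
SIZE=$(wc -c < "$FILE")
CHUNK=524288

# 1) Initiate the resumable upload and capture the Location header.
URL=$(curl -s -D - -o /dev/null -X POST 'http://localhost:1234/api/upload' \
  --header "Authorization: Bearer ${OAUTH_TOKEN}" \
  --header "Upload-Length: ${SIZE}" \
  | tr -d '\r' | awk '/^Location:/ {print $2}')

# 2) Upload the chunks in order, advancing the offset by each chunk's size.
split -b "$CHUNK" --additional-suffix=.chunk "$FILE" data.
OFFSET=0
for part in data.*.chunk; do
  curl -s -o /dev/null -X PATCH "$URL" \
    --header "Authorization: Bearer ${OAUTH_TOKEN}" \
    --header 'Content-Type: application/offset+octet-stream' \
    --header "Upload-Offset: ${OFFSET}" \
    --data-binary @"$part"
  OFFSET=$((OFFSET + $(wc -c < "$part")))
done

# 3) Finalize the upload (optional metadata can be added via X-File-Metadata).
curl -s -X POST "${URL}/done" \
  --header "Authorization: Bearer ${OAUTH_TOKEN}" \
  --header 'Content-Type: application/octet-stream'
```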
In order to support parallel and/or resumable downloads of large files, the download API endpoint (at `/api/download/...`) also supports the standard HTTP `Range` header.
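As a sketch of how that might be used (the file ID is taken from the example response above, the byte range is arbitrary, and any authorization the download endpoint requires still applies):

```sh
# Fetch only the first 512 KiB of the stored (encrypted) payload.
curl -v 'http://localhost:1234/api/download/eae1d3af22abfa0b' \
  --header 'Range: bytes=0-524287' \
  --output first-part.bin
```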