
Embedded AppSec


Embedded AppSec Best Practices (v1 Draft)


Executive Summary

Every year the prevalent use of embedded software within enterprise and consumer devices continues to rise exponentially. With the widespread publicity of the Internet of Things, more and more devices are becoming network connected, making it essential to create secure coding guidelines for embedded software. Embedded application security is often not a high priority for embedded developers producing devices such as routers, managed switches, medical devices, Industrial Control Systems (ICS), VoIP phones, IoT devices, and ATM kiosks, due to other challenges outside of development. These challenges may include, but are not limited to, the Original Design Manufacturer (ODM) supply chain, limited memory, a small stack, and the difficulty of pushing firmware updates securely to an endpoint. The goal of this project is to create a list of best practices, provide practical guidance to embedded developers, and draw on the resources that OWASP already has to bring application security expertise to the embedded world. It is important to note that each of the items and guidance points listed below is longstanding within software security. This document purely tailors issues that OWASP has previously provided guidance on (e.g. OWASP Top 10, Mobile Top 10, etc.) to the embedded community.


Executive Summary

1. Buffer and Stack Overflow Protection
2. Injection Prevention
3. Firmware Updates and Cryptographic Signatures
4. Usage of Secrets and Keys
5. Identity Management
6. Embedded Framework and C-Based Toolchain Hardening
7. Usage of Debugging Code and Interfaces
8. Transport Layer Security
9. Usage of Data Collection and Storage - Privacy
10. Third Party Code and Components

Project Leaders

Contributors

1. Buffer and Stack Overflow Protection

Prevent the use of known dangerous functions and APIs in order to protect against memory-corruption vulnerabilities within firmware (e.g. unsafe C functions such as strcat, strcpy, sprintf, and scanf). Memory-corruption vulnerabilities such as buffer overflows can consist of overflowing the stack (stack overflow) or overflowing the heap (heap overflow). For simplicity, this document does not distinguish between these two types. When a buffer overflow is exploited by an attacker, the instruction pointer register can be overwritten to execute arbitrary malicious code supplied by the attacker.

Finding vulnerable C functions in source code. Example: utilize the "find" command below within a C repository to locate potentially dangerous patterns, such as strncpy calls whose length argument is derived from strlen:

find . -type f -name '*.c' -print0 | xargs -0 grep -e 'strncpy.*strlen' | wc -l

Usage of deprecated functions, Noncompliant Code Example: This noncompliant code example assumes that gets() will not read more than BUFSIZ - 1 characters from stdin. This is an invalid assumption, and the resulting operation can cause a buffer overflow. Note further that BUFSIZ is a macro integer constant, defined in stdio.h, representing a suggested argument to setvbuf() and not the maximum size of such an input buffer.

The gets() function reads characters from the stdin into a destination array until end-of-file is encountered or a newline character is read. Any newline character is discarded, and a null character is written immediately after the last character read into the array.

#include <stdio.h>

void func(void) {
  char buf[BUFSIZ];

  if (gets(buf) == NULL) {
    /* Handle error */
  }
}

Compliant Example: The fgets() function reads, at most, one less than a specified number of characters from a stream into an array. This solution is compliant because the number of bytes copied from stdin to buf cannot exceed the allocated memory:

#include <stdio.h>
#include <string.h>

enum { BUFFERSIZE = 32 };

void func(void) {
  char buf[BUFFERSIZE];
  int ch;

  if (fgets(buf, sizeof(buf), stdin)) {
    /* fgets succeeds; scan for newline character */
    char *p = strchr(buf, '\n');
    if (p) {
      *p = '\0';
    } else {
      /* Newline not found; flush stdin to end of line */
      while (((ch = getchar()) != '\n')
             && !feof(stdin)
             && !ferror(stdin))
        ;
    }
  } else {
    /* fgets failed; handle error */
  }
}

Storing Sensitive Data, Noncompliant Example: In this example, sensitive information stored in the dynamically allocated memory referenced by secret is copied to the dynamically allocated buffer, new_secret, which is processed and eventually deallocated by a call to free(). Because the memory is not cleared, it may be reallocated to another section of the program where the information stored in new_secret may be unintentionally leaked.

char *secret;

/* Initialize secret */
char *new_secret;
size_t size = strlen(secret);
if (size == SIZE_MAX) {
  /* Handle error */
}
new_secret = (char *)malloc(size+1);
if (!new_secret) {
  /* Handle error */
}
strcpy(new_secret, secret);
/* Process new_secret... */
free(new_secret);
new_secret = NULL;

Storing Sensitive Data, Compliant Example: To prevent information leakage, dynamic memory containing sensitive information should be sanitized before being freed. Sanitization is commonly accomplished by clearing the allocated space (that is, filling the space with '\0' characters).

char *secret;

/* Initialize secret */
char *new_secret;
size_t size = strlen(secret);
if (size == SIZE_MAX) {
  /* Handle error */
}
/* Use calloc() to zero-out allocated space */
new_secret = (char *)calloc(size+1, sizeof(char));
if (!new_secret) {
  /* Handle error */
}
strcpy(new_secret, secret);
/* Process new_secret... */

/* Sanitize memory */
memset_s(new_secret, size+1, '\0', size+1);
free(new_secret);
new_secret = NULL;

strncat() is a variation on the original strcat() library function. Both are used to append one NULL terminated C string to another. The danger with the original strcat() was that the caller might provide more data than can fit into the receiving buffer, thereby overrunning it. The most common result of this is a segmentation violation. A worse result is the silent and undetected corruption of whatever followed the receiving buffer in memory.

strncat() adds an additional parameter allowing the user to specify the maximum number of bytes to copy. This is NOT the amount of data to copy, and it is NOT the size of the source data. It is a limit on the amount of data to append, and it must account for what is already in the receiving buffer: typically sizeof(buffer) - strlen(buffer) - 1, so that the result, including the null terminator, still fits.

Compliant Example usage of strncat():

char buffer[SOME_SIZE] = "";

strncat(buffer, SOME_DATA, sizeof(buffer) - strlen(buffer) - 1);

Noncompliant Example usage of strncat():

strncat( buffer, SOME_DATA, strlen( SOME_DATA ));

The screenshot below demonstrates stack protection support being enabled while building a firmware image utilizing buildroot.

Considerations:

  • What kind of buffer and where it resides: physical, logical, virtual memory

  • What data will remain when the buffer is freed or left around to LRU out

  • What strategy will be followed to ensure old buffers do not leak data (example: clear buffer after use)
  • Initialize buffers to known value on allocation
  • Consider where variables are stored: stack, static or allocated structure

  • Dispose of and securely wipe sensitive information stored in buffers or temporary files during runtime after it is no longer needed (e.g. wipe buffers from locations where personally identifiable information (PII) is stored before releasing the buffers)

  • Explicitly initialize variables
  • Ensure secure compiler flags or switches are utilized upon each firmware build (e.g. for GCC: -fPIE, -fstack-protector-all, -Wl,-z,noexecstack, -Wl,-z,noexecheap, etc.; see the example build invocation after this list and the additional references section for more details)
  • Use safe equivalent functions for known vulnerable functions such as (non-exhaustive list below):
    • gets() -> fgets()
    • strcpy() -> strncpy()
    • strcat() -> strncat()
    • sprintf() -> snprintf()
  • Those functions that do not have safe equivalents should be rewritten with safe checks implemented
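
To illustrate the compiler-flag item above, a minimal sketch of a hardened GCC invocation is shown below. The flag set, file names, and variables are assumptions for illustration and must be adapted to the target toolchain and architecture; flags that require patched toolchains (such as -Wl,-z,noexecheap) are omitted here.

# Illustrative hardened build; firmware_app and main.c are placeholder names
CFLAGS="-O2 -fPIE -fstack-protector-all -D_FORTIFY_SOURCE=2"
LDFLAGS="-pie -Wl,-z,relro -Wl,-z,now -Wl,-z,noexecstack"
gcc $CFLAGS $LDFLAGS -o firmware_app main.c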

Additional References

  2. Injection Prevention

Ensure all untrusted data and user input is validated, sanitized, and/or output encoded to prevent unintended system execution. There are various injection attacks within application security, such as operating system (OS) command injection, JavaScript injection (XSS), SQL injection, and others such as XPath injection. However, the most prevalent injection attack within embedded software is OS command injection: an application accepts untrusted/insecure input and passes it to external applications (either as the application name itself or as arguments) without validation or proper escaping.
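
A common first line of defense is to validate untrusted input against a strict allowlist before it is ever used to build a command or path. The sketch below is illustrative only; the function name and accepted character set are assumptions, not a prescribed API.

#include <ctype.h>
#include <stddef.h>

/* Accept only short tokens made of alphanumerics, '.', '-' and '_' so the
 * value can never introduce shell metacharacters or path traversal. */
static int is_safe_token(const char *s, size_t maxlen) {
  size_t i;
  if (s == NULL) return 0;
  for (i = 0; s[i] != '\0'; ++i) {
    if (i >= maxlen) return 0;
    if (!isalnum((unsigned char)s[i]) && s[i] != '.' && s[i] != '-' && s[i] != '_')
      return 0;
  }
  return i > 0;
}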

Noncompliant Code Example of using operating system calls:

In this noncompliant code example, the system() function is used to execute any_cmd in the host environment. Invocation of a command processor is not required.

#include <string.h>
#include <stdlib.h>
#include <stdio.h>

enum { BUFFERSIZE = 512 };

void func(const char *input) {
  char cmdbuf[BUFFERSIZE];
  int len_wanted = snprintf(cmdbuf, BUFFERSIZE,
                            "any_cmd '%s'", input);
  if (len_wanted >= BUFFERSIZE) {
    /* Handle error */
  } else if (len_wanted < 0) {
    /* Handle error */
  } else if (system(cmdbuf) == -1) {
    /* Handle error */
  }
}

If this code is compiled and run with elevated privileges on a Linux system, an attacker can create an account by supplying input such as happy'; useradd 'attacker, which causes the constructed command string to be interpreted as:

any_cmd 'happy';

useradd 'attacker'

Compliant Example: In this compliant solution, the call to system() is replaced with a call to execve(). The exec family of functions does not use a full shell interpreter, so it is not vulnerable to command-injection attacks, such as the one illustrated in the noncompliant code example.

The execlp(), execvp(), and (nonstandard) execvP() functions duplicate the actions of the shell in searching for an executable file if the specified filename does not contain a forward slash character (/). As a result, they should be used without a forward slash character (/) only if the PATH environment variable is set to a safe value.

The execl(), execle(), execv(), and execve() functions do not perform path name substitution.

Additionally, precautions should be taken to ensure the external executable cannot be modified by an untrusted user, for example, by ensuring the executable is not writable by the user. This compliant solution is significantly different from the preceding noncompliant code example. First, input is incorporated into the args array and passed as an argument to execve(), eliminating concerns about buffer overflow or string truncation while forming the command string. Second, this compliant solution forks a new process before executing "/usr/bin/any_cmd" in the child process. Although this method is more complicated than calling system(), the added security is worth the additional effort.

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <errno.h>
#include <stdlib.h>

void func(char *input) {
  pid_t pid;
  int status;
  pid_t ret;
  char *const args[3] = {"any_exe", input, NULL};
  char **env;
  extern char **environ;

  /* ... Sanitize arguments ... */

  pid = fork();
  if (pid == -1) {
    /* Handle error */
  } else if (pid != 0) {
    while ((ret = waitpid(pid, &status, 0)) == -1) {
      if (errno != EINTR) {
        /* Handle error */
        break;
      }
    }
    if ((ret != -1) &&
        (!WIFEXITED(status) || !WEXITSTATUS(status))) {
      /* Report unexpected child status */
    }
  } else {
    /* ... Initialize env as a sanitized copy of environ ... */
    if (execve("/usr/bin/any_cmd", args, env) == -1) {
      /* Handle error */
      _Exit(127);
    }
  }
}

Considerations:

3. Firmware Updates and Cryptographic Signatures

Ensure robust update mechanisms utilize cryptographically signed firmware images upon download and, when applicable, for updating functions pertaining to third-party software. Cryptographic signing allows verification that files have not been modified or otherwise tampered with since the developer created and signed them. The signing and verification process uses public-key cryptography, and it is difficult to forge a digital signature (e.g. a PGP signature) without first gaining access to the private key.
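
For the signing side, a detached signature can be generated at release time and shipped alongside the image, for example with GnuPG. The file name and key ID below are placeholders; production signing keys are typically kept offline or in an HSM.

gpg2 --default-key "RELEASE_KEY_ID" --armor --detach-sign firmware.bin   # produces firmware.bin.asc
gpg2 --verify firmware.bin.asc firmware.bin                              # verify the image before flashing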

In the event a private key is compromised, developers of the software must revoke the compromised key and will need to re-sign all previous firmware releases with the new key.

Verifying a kernel image signature Example:

Downloading the kernel images

wget https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.6.6.tar.xz

wget https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.6.6.tar.sign

Download the public key from a PGP keyserver in order to verify the signature.

gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 38DBBDC86092693E

gpg: /root/.gnupg/trustdb.gpg: trustdb created

gpg: key 38DBBDC86092693E: public key "Greg Kroah-Hartman (Linux kernel stable release signing key) <greg@kroah.com>" imported

gpg: no ultimately trusted keys found

gpg: Total number processed: 1

gpg: imported: 1

Uncompressing and verifying the .tar firmware image against the signature:

xz -cd linux-4.6.6.tar.xz | gpg2 --verify linux-4.6.6.tar.sign -

gpg: Signature made Wed 10 Aug 2016 06:55:15 AM EDT

gpg: using RSA key 38DBBDC86092693E

gpg: Good signature from "Greg Kroah-Hartman (Linux kernel stable release signing key) <greg@kroah.com>" [unknown]

gpg: WARNING: This key is not certified with a trusted signature!

gpg: There is no indication that the signature belongs to the owner.

Primary key fingerprint: 647F 2865 4894 E3BD 4571 99BE 38DB BDC8 6092 693E

Notice the WARNING: This key is not certified with a trusted signature! You will now need to verify that the key used to sign the archive really does belong to the owner (in our example, Greg Kroah-Hartman). There are several ways you can do this:

  1. Use the Kernel.org web of trust. This will require that you first locate the members of kernel.org in your area and sign their keys. Short of meeting the actual owner of the PGP key in real life, this is your best option to verify the validity of a PGP key signature.
  2. Review the list of signatures on the developer's key by using "gpg --list-sigs". Email as many people who have signed the key as possible, preferably at different organizations (or at least different domains). Ask them to confirm that they have signed the key in question. You should attach at best marginal trust to the responses you receive in this manner (if you receive any).
  3. Use the following site to see trust paths from Linus Torvalds' key to the key used to sign the tarball: pgp.cs.uu.nl. Put Linus's key into the "from" field and the key you got in the output above into the "to" field. Normally, only Linus or people with Linus's direct signature will be in charge of releasing kernels.

If you get "BAD signature"

If at any time you see "BAD signature" output from "gpg --verify", please check the following:

  1. Make sure that you are verifying the signature against the .tar version of the archive, not the compressed (.tar.xz) version.
  2. Make sure the downloaded file is correct and not truncated or otherwise corrupted.

Example demonstrating #1 above, verifying a signature incorrectly (against the compressed archive):

gpg --verify linux-4.6.6.tar.sign linux-4.6.6.tar.xz

gpg: Signature made Wed 10 Aug 2016 06:55:15 AM EDT

gpg: using RSA key 38DBBDC86092693E

gpg: BAD signature from "Greg Kroah-Hartman (Linux kernel stable release signing key) <greg@kroah.com>" [unknown]

Verifying a signature correctly Example:

gpg --verify linux-4.6.6.tar.sign linux-4.6.6.tar

gpg: Signature made Wed 10 Aug 2016 06:55:15 AM EDT

gpg: using RSA key 38DBBDC86092693E

gpg: Good signature from "Greg Kroah-Hartman (Linux kernel stable release signing key) <greg@kroah.com>" [unknown]

gpg: WARNING: This key is not certified with a trusted signature!

gpg: There is no indication that the signature belongs to the owner.

Primary key fingerprint: 647F 2865 4894 E3BD 4571 99BE 38DB BDC8 6092 693E

Considerations:

  • Ensure robust update mechanisms utilize cryptographically signed firmware images for updating functions.
  • Ensure updates are over TLS 1.2 (or higher).
    • Ensure updates validate the public key and certificate chain of the update server
  • Include a feature to utilize automatic firmware updates upon a predefined schedule
    • Force updates in highly vulnerable use cases
    • Scheduled push updates should be considered carefully for devices such as medical devices, where a forced update could disrupt operation.
  • Ensure firmware versions are clearly displayed.
  • Ensure firmware updates include changelogs that list the security-related vulnerabilities addressed.
  • Ensure an anti-downgrade (anti-rollback) mechanism is employed so that the device cannot be reverted to a vulnerable version (a minimal version-check sketch follows this list).
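
The anti-rollback item above can be as simple as comparing a monotonically increasing version counter in the candidate image against a minimum version held in protected, tamper-resistant storage (for example a secure element or an RPMB counter). The function and parameter names below are assumptions for the sketch.

#include <stdint.h>

/* Reject any candidate image whose version counter is lower than the
 * minimum version recorded in protected storage. */
int update_version_allowed(uint32_t candidate_version, uint32_t stored_min_version) {
  return candidate_version >= stored_min_version;
}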

Additional References


4. Usage of Secrets and Keys

Do not hardcode secrets such as passwords, usernames, tokens, private keys, or similar variants into firmware release images. This also includes the storage of sensitive data that is written to disk. If a hardware security element (SE) or Trusted Execution Environment (TEE) is available, it is recommended to utilize such features for storing sensitive data. Otherwise, use of strong cryptography should be evaluated to protect the data.

If possible, all sensitive data in clear text should be ephemeral by nature and reside in volatile memory only.

Noncompliant Hardcoded Password Example:

int VerifyAdmin(char *password) {
  if (strcmp(password, "Mew!")) {
    printf("Incorrect Password!\n");
    return 0;
  }
  printf("Entering Diagnostic Mode\n");
  return 1;
}

Noncompliant Storing sensitive data to disk Example:

In this noncompliant code example, sensitive information is supposedly stored in the dynamically allocated buffer, secret, which is processed and eventually cleared by a call to memset_s(). The memory page containing secret can be swapped out to disk. If the program crashes before the call to memset_s() completes, the information stored in secret may be stored in the core dump.

char *secret;

secret = (char *)malloc(size+1);
if (!secret) {
  /* Handle error */
}
/* Perform operations using secret... */
memset_s(secret, size+1, '\0', size+1);
free(secret);
secret = NULL;

To prevent the information from being written to a core dump, the size of core dumps that the program will generate should be set to 0 using setrlimit():

#include <sys/resource.h>

/* ... */
struct rlimit limit;
limit.rlim_cur = 0;
limit.rlim_max = 0;
if (setrlimit(RLIMIT_CORE, &limit) != 0) {
  /* Handle error */
}

char *secret;

secret = (char *)malloc(size+1);
if (!secret) {
  /* Handle error */
}
/* Perform operations using secret... */
memset_s(secret, size+1, '\0', size+1);
free(secret);
secret = NULL;

Alternatively, mlock() can be used to prevent paging by locking memory in place. This compliant solution not only disables the creation of core files but also ensures that the buffer is not swapped to hard disk:

#include <sys/resource.h>

/* ... */
struct rlimit limit;
limit.rlim_cur = 0;
limit.rlim_max = 0;
if (setrlimit(RLIMIT_CORE, &limit) != 0) {
  /* Handle error */
}

long pagesize = sysconf(_SC_PAGESIZE);
if (pagesize == -1) {
  /* Handle error */
}

char *secret_buf;
char *secret;

secret_buf = (char *)malloc(size+1+pagesize);
if (!secret_buf) {
  /* Handle error */
}

/* mlock() may require that address be a multiple of PAGESIZE */
secret = (char *)((((intptr_t)secret_buf + pagesize - 1) / pagesize) * pagesize);
if (mlock(secret, size+1) != 0) {
  /* Handle error */
}

/* Perform operations using secret... */

memset_s(secret_buf, size+1+pagesize, '\0', size+1+pagesize);
if (munlock(secret, size+1) != 0) {
  /* Handle error */
}
secret = NULL;

free(secret_buf);
secret_buf = NULL;

Considerations:

  • Do not hardcode Certificates across product lines
  • Do not hardcode passwords across product lines
  • Do not store secrets in an unprotected storage location or external storage including within an EEPROM or flash.

Additional References

5. Identity Management

User accounts within an embedded device should not be static in nature. Features that allow separation of user accounts for internal web management, internal console access, as well as remote web management and remote console access should be available to prevent automated malicious attacks.

Considerations:

  • Static passwords utilized for web management and terminal access across product lines should not be used, or should be removed as part of the release process.

  • Pre-production account validation scripts should be adjusted in a similar fashion to those that validate Wi-Fi and WPS passwords, if applicable.

  • Implement remote login and local login account features for users.
  • Separation of users for SSH login and Admin login

  • Remote login should implement a temporary account lockout threshold to prevent automated brute-force attacks (see the sketch after this list).

  • For web management interface, ensure Session IDs are not in the URL.
  • Ensure usernames and passwords are not sent over insecure protocols (e.g. HTTP, FTP and Telnet).
  • Password complexity policies should be enforced to discourage easy to guess passwords such as "Password1".
  • Ensure EEPROMs are password protected enforcing complexity requirements.
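
To illustrate the temporary lockout item above, a small per-account state machine is sketched below; the threshold, lockout duration, and names are assumptions chosen for the example rather than recommended constants.

#include <stdint.h>
#include <time.h>

#define MAX_FAILURES    5      /* consecutive failures before locking */
#define LOCKOUT_SECONDS 300    /* temporary lockout window */

struct login_state {
  uint32_t failures;
  time_t   locked_until;
};

/* Deny login attempts while the temporary lockout window is active. */
int login_permitted(const struct login_state *st) {
  return time(NULL) >= st->locked_until;
}

/* Update the counters after each attempt; lock after repeated failures. */
void record_login_result(struct login_state *st, int success) {
  if (success) {
    st->failures = 0;
    st->locked_until = 0;
  } else if (++st->failures >= MAX_FAILURES) {
    st->failures = 0;
    st->locked_until = time(NULL) + LOCKOUT_SECONDS;
  }
}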

Additional References

  6. Embedded Framework and C-Based Toolchain Hardening

When configuring firmware builds, modify BusyBox, embedded frameworks, and toolchains alike to include only the libraries and functions that are being utilized. Embedded Linux build systems such as Buildroot, Yocto, and others typically perform this task. Removal of known insecure libraries and protocols such as Telnet not only minimizes attack entry points in firmware builds, but also provides a secure-by-design approach to building software in efforts to thwart potential security threats.

Hardening a library Example: It is known that TLS compression is insecure (among other weaknesses), SSLv2 is insecure, SSLv3 is insecure, as are early versions of TLS. In addition, suppose you do not use hardware acceleration or engines and only allow static linking. Given this knowledge and these specifications, you would configure the OpenSSL library as follows (adjusting the platform target to match your toolchain):

$ Configure darwin64-x86_64-cc -no-hw -no-engine -no-comp -no-shared -no-dso -no-ssl2 -no-ssl3 --openssldir=

Selecting one shell Example: Using Buildroot, the screenshot below demonstrates only one shell being enabled, bash. (Note: Buildroot examples are shown below, but the same configuration can be accomplished with other embedded Linux build systems.)

Hardening Services Example: The screenshot below shows OpenSSH enabled but not the FTP daemons proftpd and pure-ftpd. Only enable FTP if TLS is to be utilized. For example, proftpd and pure-ftpd require custom compilation to use TLS, with mod_tls for proftpd and passing "./configure --with-tls" for pure-ftpd.

Considerations (Disclaimer: The List below is non-exhaustive):

  • Ensure services such as SSH have a secure password created
  • Remove unused language interpreters such as: perl, python, lua
  • Remove dead code from unused library functions
  • Remove unused shell interpreters such as: ash, dash, zsh
    • Review /etc/shells (see the example checks after this list)
  • Remove legacy insecure daemons which includes but not limited to: Telnet, FTP, TFTP
  • Utilize tools such as Lynis for hardening auditing and suggestions
  • Perform iterative threat model exercises with developers as well as relative stakeholders on software running on the embedded device.
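
A few quick post-build checks can confirm several of the items above against the generated root filesystem. The rootfs/ path below is a placeholder for wherever the build system stages the image.

cat rootfs/etc/shells                              # confirm only the intended shell remains
ls rootfs/usr/bin | grep -E 'perl|python|lua'      # spot interpreters that should have been removed
grep -rl -e telnetd -e tftpd rootfs/etc/init.d     # find legacy daemons still started at boot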

Additional References


7. Usage of Debugging Code and Interfaces

It is important to ensure all unnecessary pre-production build code, as well as dead/unused code, has been removed prior to firmware release to all market segments. This includes, but is not limited to, potential "backdoor code" and root privilege accounts that may have been left by parties such as Original Design Manufacturers (ODM) and third-party contractors. Typically it falls in scope for Original Equipment Manufacturers (OEM) to verify this via reverse engineering of binaries. ODMs should also be required to sign Master Service Agreements (MSA) ensuring that no "backdoor code" is included and that all code has been reviewed for software security vulnerabilities, holding all third-party developers accountable for devices that are mass deployed into the market.

Considerations (Disclaimer: The List below is non-exhaustive):

  • Remove backdoor accounts used for debugging, deployment verification and/or customer support purposes.
  • Ensure third party libraries and binary images are reviewed for backdoors by staff before market deployment.
    • Tools such as Binwalk, Firmadyne, IDA Pro, radare2, Firmware Mod Kit (FMK), and various other tools listed in the additional references (#4) should be utilized for firmware analysis (see the example triage commands after this list).
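
As a hedged illustration of such a pre-release review, the commands below unpack an image and search the extracted filesystem for suspicious strings; firmware.bin is a placeholder name and the grep patterns are only examples of terms worth hunting for.

binwalk -e firmware.bin                                                  # identify and extract embedded filesystems
grep -ri -E 'backdoor|debug|telnetd' _firmware.bin.extracted/ | head     # look for leftover debug strings and services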

Additional References

8. Transport Layer Security

Ensure all methods of communication are utilizing industry standard encryption configurations for TLS. The use of TLS ensures that all data remains confidential and untampered with while in transit. Utilize free certificate authority services such as Let's Encrypt if the embedded device utilizes domain names.

Example of how to perform a basic certificate validation against a root certificate authority, using the OpenSSL library functions:

#include <openssl/bio.h>
#include <openssl/err.h>
#include <openssl/pem.h>
#include <openssl/x509.h>
#include <openssl/x509_vfy.h>

int main() {
  const char ca_bundlestr[] = "./ca-bundle.pem";
  const char cert_filestr[] = "./cert-file.pem";

  BIO *certbio = NULL;
  BIO *outbio = NULL;
  X509 *error_cert = NULL;
  X509 *cert = NULL;
  X509_NAME *certsubject = NULL;
  X509_STORE *store = NULL;
  X509_STORE_CTX *vrfy_ctx = NULL;
  int ret;

  /* These function calls initialize openssl for correct work. */
  OpenSSL_add_all_algorithms();
  ERR_load_BIO_strings();
  ERR_load_crypto_strings();

  /* Create the Input/Output BIO's. */
  certbio = BIO_new(BIO_s_file());
  outbio = BIO_new_fp(stdout, BIO_NOCLOSE);

  /* Initialize the global certificate validation store object. */
  if (!(store = X509_STORE_new()))
    BIO_printf(outbio, "Error creating X509_STORE_CTX object\n");

  /* Create the context structure for the validation operation. */
  vrfy_ctx = X509_STORE_CTX_new();

  /* Load the certificate and cacert chain from file (PEM). */
  ret = BIO_read_filename(certbio, cert_filestr);
  if (!(cert = PEM_read_bio_X509(certbio, NULL, 0, NULL))) {
    BIO_printf(outbio, "Error loading cert into memory\n");
    exit(-1);
  }

  ret = X509_STORE_load_locations(store, ca_bundlestr, NULL);
  if (ret != 1)
    BIO_printf(outbio, "Error loading CA cert or chain file\n");

  /* Initialize the ctx structure for a verification operation:
   * set the trusted cert store, the unvalidated cert, and any
   * potential certs that could be needed (here we set it NULL). */
  X509_STORE_CTX_init(vrfy_ctx, store, cert, NULL);

  /* Check that the complete cert chain can be built and validated.
   * Returns 1 on success, 0 on verification failures, and -1
   * for trouble with the ctx object (i.e. missing certificate). */
  ret = X509_verify_cert(vrfy_ctx);
  BIO_printf(outbio, "Verification return code: %d\n", ret);

  if (ret == 0 || ret == 1)
    BIO_printf(outbio, "Verification result text: %s\n",
               X509_verify_cert_error_string(vrfy_ctx->error));

  /* The error handling below shows how to get failure details
   * from the offending certificate. */
  if (ret == 0) {
    /* Get the offending certificate causing the failure */
    error_cert = X509_STORE_CTX_get_current_cert(vrfy_ctx);
    certsubject = X509_NAME_new();
    certsubject = X509_get_subject_name(error_cert);
    BIO_printf(outbio, "Verification failed cert:\n");
    X509_NAME_print_ex(outbio, certsubject, 0, XN_FLAG_MULTILINE);
    BIO_printf(outbio, "\n");
  }

  /* Free up all structures */
  X509_STORE_CTX_free(vrfy_ctx);
  X509_STORE_free(store);
  X509_free(cert);
  BIO_free_all(certbio);
  BIO_free_all(outbio);
  exit(0);
}
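
Note that the example above validates the certificate chain only; it does not confirm that the certificate matches the hostname the device intended to contact. With OpenSSL 1.0.2 or later, a hostname check can be added along the lines of the following sketch (the wrapper function name and expected_hostname parameter are illustrative).

#include <openssl/x509.h>
#include <openssl/x509v3.h>

/* Returns 1 only when the already chain-validated certificate matches the
 * hostname the device expects to be talking to. */
int hostname_matches(X509 *cert, const char *expected_hostname) {
  /* X509_check_host() returns 1 on match, 0 on mismatch, negative on error. */
  return X509_check_host(cert, expected_hostname, 0, 0, NULL) == 1;
}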

Considerations (Disclaimer: The List below is non-exhaustive):

  • Use only TLS 1.2 and higher for new products.
  • If possible, consider using mutual-authentication to authenticate both end-points.
  • Validate the certificate public key, hostname, and chain.
  • Ensure certificates and their chains use SHA-256 for signing
  • Disable deprecated SSL and early TLS versions
  • Disable deprecated, NULL and weak cipher suites
  • Ensure private keys and certificates are stored securely, e.g. in a Secure Element or Trusted Execution Environment, or protected using strong cryptography.
  • Keep certificates updated with up to date secure configurations.
  • Ensure proper certificate update features are available upon expiration

  • Verify TLS configurations utilizing services such as ssllabs.com or, for devices that are not publicly reachable, local tools such as sslscan or nmap (see the example commands after this list)
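
For devices that hosted scanners cannot reach, a local check of the management interface can be sketched as follows; device.example.com and the port are placeholders.

nmap --script ssl-enum-ciphers -p 443 device.example.com
sslscan device.example.com:443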

Other Example(s):

To utilize TLS, there are other options besides OpenSSL. A non-exhaustive list is below.

  • mbed TLS (formerly PolarSSL); a list of projects using mbed TLS and implementation examples are available on the project's site.
  • wolfSSL (formerly CyaSSL); a list of projects using wolfSSL and implementation examples are available on the project's site.

Additional References

  9. Usage of Data Collection and Storage - Privacy

It is critical to limit the collection, storage, and sharing of both personally identifiable information (PII) as well as sensitive personal information (SPI). Leaked information such as Social Security Numbers can lead to customers being compromised which could have legal repercussions for manufacturers. If information of this nature must be gathered, it is important to follow the concepts of Privacy-by-Design.

Considerations (Disclaimer: The List below is non-exhaustive):

  • Determine which PII/SPI is critical for device operation and whether storage of the information is required for business and/or operational purposes.
  • Limit the duration of storage time to the shortest amount of time needed for device operation.
  • Ensure the information is stored securely - i.e. in a secure environment, or protected using strong cryptography.
  • Provide transparency for customers by including details about what information is being collected, stored, and distributed via privacy policies.
  • Provide a mechanism to allow the device owner to perform a factory reset to remove their personal data before transfer to another user or destruction.

Additional References

  10. Third Party Code and Components

Following setup of the toolchain, it is important to ensure that the kernel, software packages, and third-party libraries are updated to protect against publicly known vulnerabilities. Software such as RomPager, or embedded build tools such as Buildroot, should be checked against vulnerability databases as well as their changelogs to determine when and if an update is needed. It is important to note this process should be tested by developers and/or QA teams prior to release builds, as updates to embedded systems can cause issues with the operation of those systems.

Retire.js in a JavaScript project directory Example:

$ retire .

Loading from cache: https://raw.githubusercontent.com/RetireJS/retire.js/master/repository/jsrepository.json

Loading from cache: https://raw.githubusercontent.com/RetireJS/retire.js/master/repository/npmrepository.json

/js/jquery-1.4.4.min.js

↳ jquery 1.4.4.min has known vulnerabilities: severity: medium; CVE: CVE-2011-4969; http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-4969 http://research.insecurelabs.org/jquery/test/ severity: medium; bug: 11290, summary: Selector interpreted as HTML; http://bugs.jquery.com/ticket/11290 http://research.insecurelabs.org/jquery/test/ severity: medium; issue: 2432, summary: 3rd party CORS request may execute; https://github.com/jquery/jquery/issues/2432 http://blog.jquery.com/2016/01/08/jquery-2-2-and-1-12-released/


/javascript/vendor/jquery-1.9.1.min.js

↳ jquery 1.9.1.min has known vulnerabilities: severity: medium; issue: 2432, summary: 3rd party CORS request may execute; https://github.com/jquery/jquery/issues/2432 http://blog.jquery.com/2016/01/08/jquery-2-2-and-1-12-released/

/javascript/vendor/jquery-migrate-1.1.1.min.js

↳ jquery-migrate 1.1.1.min has known vulnerabilities: severity: medium; release: jQuery Migrate 1.2.0 Released, summary: cross-site-scripting; http://blog.jquery.com/2013/05/01/jquery-migrate-1-2-0-released/ severity: medium; bug: 11290, summary: Selector interpreted as HTML; http://bugs.jquery.com/ticket/11290 http://research.insecurelabs.org/jquery/test/

/javascript/vendor/moment.min.js

↳ moment.js 2.10.6 has known vulnerabilities: severity: low; summary: reDOS - regular expression denial of service; https://github.com/moment/moment/issues/2936

Considerations (Disclaimer: The List below is non-exhaustive):

  • Utilize Retire.js for JavaScript libraries
    • Utilize nsp for Node.js packages (see the example after this list)
  • Utilize tools such as Lynis for basic Kernel hardening auditing and suggestions
    • wget --no-check-certificate https://github.com/CISOfy/lynis/archive/master.zip && unzip master.zip && cd lynis-master/ && bash lynis audit system
      • Review the report in: /var/log/lynis.log
    • Note : Lynis will bypass Kernel checks if a Linux kernel is not in use. The following error message will be in the logs: "Skipped test KRNL-5695 (Determine Linux kernel version and release number) Reason to skip: Incorrect guest OS (Linux only)"
  • Utilize package managers (opkg, ipkg, etc.. ) or custom update mechanisms for misc libraries within the toolchain
  • Review changelogs of toolchains, software packages, and libraries to better determine if an update is needed
  • Ensure the implementation of embedded build systems such as Yocto and Buildroot are set up in a way that allows for the update of all included packages
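
As a short sketch of the JavaScript/Node.js checks above, run from the web interface's project directory (nsp was the Node Security Platform client current at the time of writing; both tools only flag publicly known issues).

retire .      # scan bundled JavaScript libraries for known CVEs
nsp check     # audit Node.js package dependencies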

Additional References

Project Leaders

Aaron Guzman @scriptingxss

Alex Lafrenz

Contributors

Insert your name below if you have contributed and would like to be credited.

Jim Manico