Chunked upload using Zephyr's http client with HTTP_POST

I am trying to send a file that is larger than the amount of RAM in my processor. I am using the Zephyr RTOS.

Sending a file that is 8K or less using HTTP_POST works as expected. In this scenario, the entire file is provided to Zephyr’s http_client_req().

When the headers are changed from empty to const char *headers[] = { "\"transfer-encoding\": \"chunked\"", NULL }, I get the following:
Response data: HTTP/1.1 201 Created
Date: Mon, 17 Jun 2024 19:21:09 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 33
Connection: keep-alive
cache-control: no-cache, no-store, must-revalidate
pragma: no-cache
x-content-type-options: nosniff
x-xss-protection: 1; mode=block
access-control-allow-origin: *
vary: origin
access-control-expose-headers: *, Authorization
access-control-allow-credentials: true
Strict-Transport-Security: max-age=31536000

{"success": "File created"}
Response code: 201
Upload status: 885

The file is created with length 0. It seems that the socket is closed when data is sent in the send payload callback.

Am I missing something in the header or does a workflow need to be modified to support chunked uploads? Is there a chunked example that I have missed?

What endpoint are you hitting here? {"success": "File created"} is not a response we return from the API. Is this an experience endpoint or webhook request? And what sort of file are you trying to send?

It is an experience endpoint.

url: /api/device-file-upload/112233445566/log.0051

I am sending a text file in this example. However, I also need to send binary files.

We did some testing with chunked encoding on Experience Endpoints and did not hit any issues. You can test this yourself with the following curl command:

curl --http1.1 --request POST 'https://YOUR-ENDPOINT' -H transfer-encoding:chunked --header 'content-type:text/plain' --data-binary "@/Users/brandon/path/to/file.csv" --trace-ascii -

This command will upload a file to your endpoint. The --trace-ascii - flag prints debug output to the console showing each chunk as it is transferred.

One thing to consider is that transfer-encoding: chunked is not valid for HTTP/2. Experience Endpoints do support HTTP/2, and many clients will default to HTTP/2. That'd be the first thing to look into.

I believe my client is using "HTTP/1.1".

Could you point me to the workflow that you used with your test? Is a Webhook required?

I think that our workflow is closing the socket after the first post.

Handling chunked-encoded data is done outside of the workflow engine. The message is only passed to the workflow when all data is received. This means the workflow only requires an Endpoint or Webhook trigger (they both work the same way).

I tested this using curl. curl only breaks up messages larger than MAX_INITIAL_POST_SIZE (64 KB). If you want to test this yourself, you'll need to pick a file larger than 64 KB but smaller than 256 KB (the max payload size).

Inspecting the curl output, we can see the separation of chunks. The screenshot below shows the final chunk (5378 bytes).

I think that our workflow is closing the socket after the first post.

I just want to confirm that your client is not performing separate POSTs. Chunked encoding is done by opening the connection with a single POST and then sending just the data while the socket remains open. The client then sends a terminating chunk which closes the connection and completes the request.
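On the wire, each chunk is just a hex-encoded length, a CRLF, the chunk data, and another CRLF; the terminating chunk is a zero length (0 followed by a blank line).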

Backing up a little, chunked encoding should not be necessary for this scenario. Chunked encoding is usually used when servers are replying to clients and the server does not know the content length up front.

I'm not super familiar with Zephyr, but it looks like you'd use the payload callback. In that function, you'd have a loop that reads parts of the file (however much can fit in memory) and then uses the socket send function to write each part to the socket. You can invoke send multiple times to get the entire contents into a single POST request without any special encoding.
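A minimal sketch of what that payload callback could look like, assuming the file lives on a Zephyr file system (LittleFS in this example); LOG_FILE_PATH and CHUNK_BUF_SIZE are placeholder names, and the callback signature follows Zephyr's http_client sample at the time of writing, so it may differ slightly between Zephyr versions:

// Stream a file to the socket in RAM-sized pieces from the payload callback.
#include <errno.h>
#include <zephyr/kernel.h>
#include <zephyr/fs/fs.h>
#include <zephyr/net/http/client.h>
#include <zephyr/net/socket.h>

#define LOG_FILE_PATH  "/lfs/log.0051"   // placeholder path on the device file system
#define CHUNK_BUF_SIZE 1024              // whatever comfortably fits in RAM

static int payload_cb(int sock, struct http_request *req, void *user_data)
{
    static uint8_t buf[CHUNK_BUF_SIZE];
    struct fs_file_t file;
    ssize_t read_len;
    int total = 0;
    int rc;

    ARG_UNUSED(req);
    ARG_UNUSED(user_data);

    fs_file_t_init(&file);
    rc = fs_open(&file, LOG_FILE_PATH, FS_O_READ);
    if (rc < 0) {
        return rc;
    }

    // Read the file one piece at a time and write each piece to the
    // already-open socket. Everything happens inside one POST request;
    // no chunked transfer-encoding is involved.
    while ((read_len = fs_read(&file, buf, sizeof(buf))) > 0) {
        ssize_t sent = 0;

        while (sent < read_len) {
            ssize_t out = send(sock, buf + sent, read_len - sent, 0);

            if (out < 0) {
                fs_close(&file);
                return -errno;
            }
            sent += out;
        }
        total += read_len;
    }

    fs_close(&file);
    return (read_len < 0) ? (int)read_len : total;
}

http_client_req() invokes this callback once after it has written the request headers, so all of the send calls happen inside that single invocation.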

I only get a single callback and then the socket is closed.

I’ll try without putting chunked into the header.

I only get a single callback and then the socket is closed.

This is correct. In that single callback, you can invoke send multiple times. I wouldn’t expect the connection to close until you return from the callback function.

I’m not sure why the socket is closing. I am now testing sending a smaller binary file.

Can you try your example with File: Create instead of CSV: Decode?

What do I set the File Contents Template to in order to save the file as binary? I send 37 bytes and I expect the data in the output file to be 37 bytes (not a comma separated file with 37 values). Is saving in binary even possible with File Create?

https://files.onlosant.com/622a42d69ac44edc3def2017/deviceLogs/112233445566/test8.bin

Hi Brandon/Losant Team,

Any insight regarding Andrew’s questions re: binary files and file create node?

FYI, Andrew is doing some work for us (Carmanah), and this particular effort is for our next IoT product release, which is scheduled to go live this fall, so there is some time crunch to get this solved.

Thanks in advance,
Stephen

What’s on the payload is an array of the raw bytes that make up the file. You can use a function node and the Buffer object to encode the data in a form the File: Create Node accepts.

In the screenshot above, I'm encoding the data as a Base64 string. Here is the code:

// Create buffer from array of byte values.
let buf = Buffer.from(payload.data.file);

// Encode the buffer as a Base64 string.
payload.encoded = buf.toString('base64');

Next, we’re passing the Base64 string to the File: Create Node. You’ll have to ensure the File Encoding property is set to Base64. You’ll also want to use triple curly braces to remove the HTML escaping.
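For example, assuming the Function Node wrote the string to payload.encoded as above, the File Contents Template could be something like {{{encoded}}}.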

As a confirmation, I downloaded the file locally and inspected its raw contents to make sure I ended up with the same raw bytes:


With Zephyr, use the payload callback and set the payload length, without transfer-encoding: chunked in the header.
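A rough sketch of that request setup, continuing the payload callback example from earlier in the same source file; the host name, content type, receive buffer size, and timeout are placeholders, and the response callback signature may vary between Zephyr versions:

// Build the POST request: payload callback plus total length (sent as
// Content-Length), and no Transfer-Encoding: chunked header at all.
#include <string.h>

static uint8_t recv_buf[512];

static void response_cb(struct http_response *rsp,
                        enum http_final_call final_data,
                        void *user_data)
{
    ARG_UNUSED(user_data);

    if (final_data == HTTP_DATA_FINAL) {
        printk("Response status %s\n", rsp->http_status);
    }
}

static int upload_file(int sock, size_t file_size)
{
    struct http_request req;

    memset(&req, 0, sizeof(req));

    req.method = HTTP_POST;
    req.url = "/api/device-file-upload/112233445566/log.0051";
    req.host = "your-endpoint.onlosant.com";   // placeholder: your endpoint host
    req.protocol = "HTTP/1.1";
    req.content_type_value = "application/octet-stream";
    req.payload_cb = payload_cb;               // file-streaming callback from the earlier sketch
    req.payload_len = file_size;               // total bytes; sent as Content-Length
    req.response = response_cb;
    req.recv_buf = recv_buf;
    req.recv_buf_len = sizeof(recv_buf);
    // Note: no "Transfer-Encoding: chunked" entry in optional_headers.

    return http_client_req(sock, &req, 5000, NULL);
}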