Storage-efficient remote file integrity check for an embedded Linux device




I have an embedded Linux device with limited RAM and flash. Because of the limited RAM and flash, I need to download a binary file from an HTTP server in small chunks and write each chunk to flash. The problem is that I can't determine the integrity of the file until I have downloaded the last chunk. In the worst case, after getting the last chunk, I might find that the file was tampered with or is not as 'integral' as expected (I have the expected md5sum of the file), even though I have already downloaded all the chunks and written them to flash. (I can mark the downloaded flash area as valid only after the last chunk, but by then I have wasted time and flash lifetime.) Is there a way to send a request to the remote HTTP server to verify the md5sum of the file against the expected md5sum value?

From what I understand of the problem and the discussion in the comments, here is a high-level picture, assuming you can add some stuff on the server side.

On the client side:

1. Request the list of running checksums c_1, …, c_n for the file f (in chunks of m bytes) from the server.
2. Create a hash context c.
3. Request the file f from the server.
4. Repeat for every m-byte chunk b_i received (i = 1, …, n):
   - update the hash context: update(c, b_i)
   - compute the current digest: d_i ← digest(c)
   - if d_i ≠ c_i: abort the transfer, report an error, try again, whatever…
   - save chunk b_i to disk.
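Here is a minimal client-side sketch of that loop in Python, assuming the checksum list is served as one hex digest per line at the kind of URL described further below; the URLs, chunk size and error handling are illustrative assumptions, not a fixed API:

    # Minimal client-side sketch (Python, standard library only).
    # The URLs and the "one hex digest per line" checksum format are
    # assumptions based on the scheme described above.
    import hashlib
    import urllib.request

    CHUNK_SIZE = 1048576  # 1 MiB, must match the chunk size used by the server

    def fetch_checksums(url):
        """Download the list of running checksums, one hex digest per line."""
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("ascii").split()

    def download_verified(file_url, checksums, out_path):
        """Stream the file, verifying the running MD5 digest after every chunk."""
        ctx = hashlib.md5()
        with urllib.request.urlopen(file_url) as resp, open(out_path, "wb") as out:
            for i, expected in enumerate(checksums):
                block = resp.read(CHUNK_SIZE)
                if not block:
                    break
                ctx.update(block)            # update(c, b_i)
                digest = ctx.hexdigest()     # d_i <- digest(c); hashlib allows
                                             # querying intermediate digests
                if digest != expected:       # d_i != c_i -> abort the transfer
                    raise IOError("chunk %d failed verification" % i)
                out.write(block)             # only now commit the chunk to storage

    checksums = fetch_checksums(
        "http://example.com/checksums?algorithm=md5;file=file.dat;chunksize=1048576")
    download_verified("http://example.com/file.dat", checksums, "file.dat")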

On the server side:

If the client requests the list of running checksums c_1, …, c_n for file f in chunks of m bytes:
1. Create a hash context c.
2. Repeat for every m-byte chunk b_i of f (i = 1, …, n):
   - update the hash context: update(c, b_i)
   - compute the current digest: d_i ← digest(c)
   - send d_i to the client.

Else, if the client requests the file f:
- Send f to the client.
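A minimal sketch of the server-side checksum generation in Python (the function name and the one-digest-per-line output are assumptions; how the result is actually served – CGI, a web framework, or a pre-generated static file – is up to you):

    # Compute the running MD5 digests for every m-byte chunk of a file
    # in a single pass over the file.
    import hashlib

    def running_checksums(path, chunk_size=1048576, algorithm="md5"):
        """Return one hex digest per chunk: d_i = digest of b_1 .. b_i."""
        ctx = hashlib.new(algorithm)
        digests = []
        with open(path, "rb") as f:
            while True:
                block = f.read(chunk_size)
                if not block:
                    break
                ctx.update(block)                # update(c, b_i)
                digests.append(ctx.hexdigest())  # d_i <- digest(c)
        return digests

    if __name__ == "__main__":
        print("\n".join(running_checksums("file.dat")))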

This scheme lets you request the list of running checksums (perhaps as a text file with one digest per line) for the file file.dat in chunks of 1 MiB via a normal HTTP request to http://example.com/checksums?algorithm=md5;file=file.dat;chunksize=1048576. The actual file data can then be requested from http://example.com/file.dat.

Alternatively – if you think all clients will want the checksums and don't need fine-grained control over the algorithm or chunk size – you could add additional HTTP headers and make the server's reply look like this:

    HTTP/1.1 200 OK
    Content-Type: application/octet-stream
    Content-Length: 52428800
    My-Checksum-Algorithm: md5
    My-Checksum-Chunk-Size: 1048576
    My-Checksum-Chunk: chunk=0, digest=c9a3a83280571697868f12e74e4ede4f
    My-Checksum-Chunk: chunk=1, digest=d0c13dff943c5b67f411732304b6f46f
    My-Checksum-Chunk: chunk=2, digest=34465c3e2e2eb2576d46253bea5cfc44
    My-Checksum-Chunk: ...
    My-Checksum-Total: f2bf55ff8b38dc667b91b6b988cdf940

    Here goes the data...

It should not be hard to parse these headers and extract the required information. Of course, the format of the headers would need to be adapted to your specific needs.
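For instance, a client could pull those hypothetical My-Checksum-* headers apart like this (the header names and the chunk=…, digest=… value format are the made-up ones from the example reply above; reading them via a HEAD request here is just one option):

    # Sketch of parsing the example My-Checksum-* headers on the client.
    import urllib.request

    def read_chunk_checksums(url):
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            algorithm = resp.headers["My-Checksum-Algorithm"]
            chunk_size = int(resp.headers["My-Checksum-Chunk-Size"])
            digests = {}
            for value in resp.headers.get_all("My-Checksum-Chunk", []):
                # e.g. "chunk=0, digest=c9a3a83280571697868f12e74e4ede4f"
                fields = dict(part.strip().split("=", 1)
                              for part in value.split(","))
                digests[int(fields["chunk"])] = fields["digest"]
            total = resp.headers["My-Checksum-Total"]
        return algorithm, chunk_size, digests, total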

If you are using chunked transfer encoding, you might want to add the checksums along with each chunk rather than all at the beginning, to save the server from having to process the file twice.
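One way to do that is to attach the running digest to every chunk as an HTTP/1.1 chunk extension. A rough sketch of generating such a raw chunked body follows; the ;digest= extension name is made up, and your HTTP stack must let you emit raw chunk frames:

    # Emit a raw HTTP/1.1 chunked body where every chunk carries its running
    # MD5 digest as a chunk extension ("<size-hex>;digest=<hex>\r\n<data>\r\n").
    import hashlib

    def chunked_body_with_digests(path, chunk_size=1048576):
        ctx = hashlib.md5()
        with open(path, "rb") as f:
            while True:
                block = f.read(chunk_size)
                if not block:
                    break
                ctx.update(block)
                header = "%X;digest=%s\r\n" % (len(block), ctx.hexdigest())
                yield header.encode("ascii") + block + b"\r\n"
        yield b"0\r\n\r\n"  # last-chunk marker terminates the body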

Note that all of the above can only help detect accidental data corruption (which TCP already tries to make unlikely, so I'm not sure how much you gain by being overly pessimistic). The scheme cannot protect against man-in-the-middle attacks. If that concerns you, you should establish a trusted TLS connection (HTTPS) and transfer the file over it. HTTPS cannot protect you if somebody breaks into the server, though. If that possibility should be handled as well, sign the data with OpenPGP and verify the integrity of the signature on the device. Of course, the private key used to create the signature must not be stored on the server.
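If you go the OpenPGP route, the device could, for instance, download a detached signature next to the file and verify it with gpg. A hedged sketch, assuming gpg is available on the device and the signer's public key is already in its keyring (the file names are examples):

    # Verify a detached OpenPGP signature on the device using gpg.
    import subprocess

    def verify_signature(signature="file.dat.sig", data="file.dat"):
        result = subprocess.run(["gpg", "--verify", signature, data])
        return result.returncode == 0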

