
Upload speed to S3 #9

Open
borellim opened this issue Oct 22, 2019 · 3 comments

@borellim
Contributor

Hello.
Is there anything that we can do to increase the upload speed to an S3 service via invenio-s3?

I compared the upload speed obtained in our app versus a direct upload to S3 with boto3 (from the same machine that serves our app), and I am getting different results. For a 1 GB file, when uploading through our app we see first 150-200 Mbps data transfer from the browser for about 1 minute, with gunicorn sitting at 99% CPU; then for about 2 minutes we see no upload from the browser, while gunicorn sits at 10-15% CPU, until the browser finally receives a 200 response (total 3 minutes). With a direct upload to S3 via boto3, instead, it takes about 13 seconds in total.
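For reference, the timing comparison can be reproduced with a small harness like the one below (the endpoint URL, bucket, and file names are placeholders for our actual setup):

```python
import time

def timed(label, fn, *args, **kwargs):
    """Run fn, print the elapsed wall-clock time, and return its result."""
    start = time.monotonic()
    result = fn(*args, **kwargs)
    print(f"{label}: {time.monotonic() - start:.1f}s")
    return result

# Direct upload with boto3 (placeholder names):
# import boto3
# s3 = boto3.client("s3", endpoint_url="https://s3.example.org")
# timed("boto3 upload", s3.upload_file, "test-1gb.bin", "test_s3", "test-file")
```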

To simplify testing, I'm using a simple Flask view, in which I have the following lines that do the job:

        # imports needed at module level:
        #   from flask import request
        #   from invenio_s3 import S3FSFileStorage
        f = request.files['file']  # uploaded file from the multipart form
        s3fs = S3FSFileStorage('s3://test_s3/test-file-2')
        s3fs.initialize(size=0, acl='private')
        s3fs.update(f.stream, acl='private')

In the real app, we actually create a record with invenio_deposit.api.Deposit.create(), then attach the file to the record, but we see the same speed as in this simple test.

Our setup is: Apache2 acting as front line server, with a reverse proxy to gunicorn on the same machine. Setting or not DEBUG=True in config.py does not seem to make a difference for this.

We are actually using our own fork of invenio-s3, with some changes we needed to make it work (I opened PR #8 in case you find them useful), but I don't think they are relevant to this issue.

I also found some code to profile requests to gunicorn: I'll paste the result below, but I'm not quite sure how to interpret it.

Thanks a lot in advance for the help!

[POST] URI /s3/upload
         130142856 function calls (130133607 primitive calls) in 226.399 seconds

   Ordered by: internal time, cumulative time
   List reduced from 1503 to 30 due to restriction <30>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
      413   96.123    0.233   96.123    0.233 {method 'poll' of 'select.poll' objects}
   269586   15.689    0.000   15.689    0.000 {method 'read' of '_ssl._SSLSocket' objects}
  8372516   14.946    0.000   42.725    0.000 /home/ubuntu/.virtualenvs/archive/lib/python3.5/site-packages/werkzeug/wsgi.py:733(_iter_basic_lines)
 16745019   13.118    0.000   28.627    0.000 /home/ubuntu/.virtualenvs/archive/lib/python3.5/tempfile.py:903(write)
        2   12.813    6.406   99.145   49.573 /home/ubuntu/.virtualenvs/archive/lib/python3.5/site-packages/werkzeug/formparser.py:531(parse_parts)
 16737045   12.508    0.000   12.508    0.000 {method 'write' of '_io.BufferedRandom' objects}
 16745022   11.310    0.000   57.705    0.000 /home/ubuntu/.virtualenvs/archive/lib/python3.5/site-packages/werkzeug/formparser.py:427(parse_lines)
     2078    7.606    0.004    7.606    0.004 {method 'update' of '_hashlib.HASH' objects}
  1048577    5.407    0.000   19.415    0.000 /home/ubuntu/.virtualenvs/archive/lib/python3.5/site-packages/gunicorn/http/body.py:112(read)
  8372516    3.666    0.000   46.394    0.000 /home/ubuntu/.virtualenvs/archive/lib/python3.5/site-packages/werkzeug/wsgi.py:687(make_line_iter)
   131285    3.255    0.000    3.255    0.000 {method 'write' of '_ssl._SSLSocket' objects}
      206    3.100    0.015    3.100    0.015 {method 'read' of '_io.BufferedRandom' objects}
 16745019    2.995    0.000    2.997    0.000 /home/ubuntu/.virtualenvs/archive/lib/python3.5/tempfile.py:792(_check)
  3434517    2.814    0.000    2.814    0.000 {method 'write' of '_io.BytesIO' objects}
  1312794    2.354    0.000   10.290    0.000 /home/ubuntu/.virtualenvs/archive/lib/python3.5/site-packages/gunicorn/http/unreader.py:21(read)
   132917    2.079    0.000    2.079    0.000 {method 'read' of '_io.BytesIO' objects}
    16386    1.822    0.000   22.031    0.001 /home/ubuntu/.virtualenvs/archive/lib/python3.5/site-packages/gunicorn/http/body.py:199(read)
    16385    1.733    0.000    1.733    0.000 {method 'splitlines' of 'bytes' objects}
  8425799    1.617    0.000    1.617    0.000 {method 'append' of 'list' objects}
  8601967    1.136    0.000    1.136    0.000 {built-in method builtins.len}
  8391318    1.126    0.000    1.185    0.000 {method 'join' of 'bytes' objects}
  1048577    1.022    0.000    1.832    0.000 /home/ubuntu/.virtualenvs/archive/lib/python3.5/site-packages/gunicorn/http/unreader.py:53(unread)
  2362416    0.629    0.000    0.629    0.000 {method 'seek' of '_io.BytesIO' objects}
   131285    0.438    0.000    4.185    0.000 /usr/lib/python3.5/ssl.py:881(sendall)
  3716346    0.427    0.000    0.427    0.000 {method 'tell' of '_io.BytesIO' objects}
  1065394    0.409    0.000    0.409    0.000 {built-in method builtins.min}
  2113508    0.404    0.000    0.404    0.000 {method 'getvalue' of '_io.BytesIO' objects}
   269586    0.379    0.000   16.350    0.000 /usr/lib/python3.5/ssl.py:783(read)
   264250    0.362    0.000    6.925    0.000 /usr/lib/python3.5/ssl.py:907(recv)
       11    0.332    0.030    0.332    0.030 /usr/lib/python3.5/json/decoder.py:345(raw_decode)
@egabancho
Member

Hi Marco!
It looks like someone was busy, this is great!

I ran some tests in the past with big files and they seemed fine. You said you've tested the upload both through the WSGI app and with boto3 directly; do you think you could also run a quick test with the code you wrote above, uploading the file without involving the WSGI server? Or maybe something similar to https://github.com/inveniosoftware/invenio-s3/blob/master/tests/test_storage.py#L113. I just want to see where the time is spent when using Invenio-S3 alone.

I do remember having some trouble with gunicorn and big files in the past, but I can't recall what it was; we eventually switched to uWSGI 😂

@borellim
Contributor Author

Hello Esteban. Thank you for your help, and sorry for my very late reply.

I repeated the test I did last time. For some reason now the first part of the upload (the transfer from the browser to our server) has gone from 1 minute to ~30 seconds. I suspect that our cloud provider has given us more powerful vCPUs, since this is the CPU-bound part.

As for the second part of the process (the transfer from the server to the object store), I found that I can speed it up by setting a larger default_block_size when creating the S3FileSystem object. For example, setting it to 100 MB (the default is 5 MB) reduces the time for this section from 2 minutes to 30 seconds. I am going to propose a new config variable in PR #8 on invenio-s3 (that I have also left open for a while).
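The effect of the block size is easy to see arithmetically: s3fs performs a multipart upload with one HTTP request per block, so a larger block sharply reduces the number of parts and the per-request overhead. A rough sketch, assuming binary MiB/GiB sizes:

```python
import math

MIB = 1024 ** 2
GIB = 1024 ** 3

def multipart_parts(file_size, block_size):
    """Number of parts in a multipart upload (one HTTP request each)."""
    return math.ceil(file_size / block_size)

print(multipart_parts(1 * GIB, 5 * MIB))    # 5 MiB blocks  -> 205 parts
print(multipart_parts(1 * GIB, 100 * MIB))  # 100 MiB blocks -> 11 parts
```

The trade-off is memory: each in-flight block is buffered, so a 100 MB block size costs more RAM per concurrent upload.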

Finally, I repeated all this without gunicorn, using the built-in Flask server instead (via invenio run): this didn't seem to make any difference, except that during the first part of the process it is now the Python process at 100% CPU rather than gunicorn.

This is already a nice improvement for us (I can now upload a 1 GB file in 1 minute). It's still not as fast as Zenodo's upload, but Zenodo seems to use the deposit API directly, while we go through a form, which is probably not ideal. Also, I am not sure whether Zenodo immediately pushes deposits to an object store, or whether they use local storage instead.

@egabancho
Member

egabancho commented Mar 1, 2020

> As for the second part of the process (the transfer from the server to the object store), I found that I can speed it up by setting a larger default_block_size when creating the S3FileSystem object. For example, setting it to 100 MB (the default is 5 MB) reduces the time for this section from 2 minutes to 30 seconds. I am going to propose a new config variable in PR #8 on invenio-s3 (that I have also left open for a while).

We saw more or less the same behavior and have already added a few changes and configuration variables. Check #15; I think it's what you are looking for. It should get merged and released in the near future.
