
Add benchmarking script for modify_file function #1368

Open · wants to merge 145 commits into base: main
Conversation

sweep-nightly bot (Contributor) commented Aug 23, 2023

Description

This PR adds a benchmarking script for the modify_file function in the diff.py module. The benchmarking script is located at sweepai/utils/benchmark_modify_file.py. It allows for testing the performance of the modify_file function by parsing a file for the necessary context and measuring the execution time.

Summary of Changes

  • Created a new Python script benchmark_modify_file.py in the sweepai/utils directory.
  • Imported the necessary modules for benchmarking and the modify_file function from diff.py.
  • Defined the benchmark_modify_file function that takes a file path as an argument.
  • Opened and read the file using the provided file path.
  • Recorded the current time before and after calling the modify_file function to measure the execution time.
  • Printed the execution time of the modify_file function.
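The steps above can be sketched as a minimal, self-contained timing harness. Note the real `modify_file` lives in `diff.py` and its signature is not shown in this PR, so the stub below is a hypothetical stand-in that just simulates work:

```python
import time


def modify_file(file_contents: str) -> str:
    # Hypothetical stand-in for sweepai's modify_file in diff.py;
    # the real function's signature may differ.
    return file_contents.upper()


def benchmark_modify_file(file_path: str) -> float:
    """Time a single call to modify_file on the given file's contents."""
    # Open and read the file using the provided file path.
    with open(file_path) as f:
        file_contents = f.read()
    # Record the time before and after the call; perf_counter is a
    # monotonic clock suited to measuring short durations.
    start_time = time.perf_counter()
    modify_file(file_contents)
    elapsed = time.perf_counter() - start_time
    # Print the execution time of the modify_file call.
    print(f"modify_file took {elapsed:.4f} seconds")
    return elapsed
```

Using `time.perf_counter()` rather than `time.time()` avoids skew from system clock adjustments during the measurement.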

Please review and merge this PR to enable benchmarking of the modify_file function.

Fixes #1367.


To check out this PR branch, run the following command in your terminal:

git checkout {pull_request.branch_name}

🎉 Latest improvements to Sweep:

  • Use Sweep Map to break large issues into smaller sub-issues, perfect for large tasks like "Sweep (map): migrate from React class components to function components"
  • Getting Sweep to format before committing! Check out Sweep Sandbox Configs to set it up.
  • We released a demo of our chunker, where you can find the corresponding blog and code.

💡 To get Sweep to edit this pull request, you can:

  • Leave a comment below to get Sweep to edit the entire PR
  • Leave a comment on a line of code to have Sweep modify only that file
  • Edit the original issue to get Sweep to recreate the PR from scratch

vercel bot commented Aug 23, 2023

The latest updates on your projects (Vercel for Git):

sweep-docs — ✅ Ready — updated Aug 31, 2023 4:04am (UTC)

wwzeng1 (Contributor) commented Aug 24, 2023

Using content from on_ticket.py, can you prefill the context needed to initialize SweepBot? You may need to use the repo as well, just put placeholders for now

sweep-nightly bot (Contributor, Author) commented Aug 24, 2023

> Using content from on_ticket.py, can you prefill the context needed to initialize SweepBot? You may need to use the repo as well, just put placeholders for now

Hi @wwzeng1,

I decided to make the following changes:

File Path Proposed Changes
sweepai/utils/benchmark_modify_file.py Modify sweepai/utils/benchmark_modify_file.py with contents:
• Import the SweepContext class from sweepai.core.sweep_bot.
• Inside the benchmark_modify_file function, before initializing the SweepBot instance, create an instance of the SweepContext class. Pass the necessary arguments to the constructor to generate the context. Use placeholders for these arguments for now.
• Pass the generated context as an argument to the SweepBot constructor when initializing the SweepBot instance.
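The proposed wiring can be sketched as follows. The real `SweepContext` and `SweepBot` live in `sweepai.core.sweep_bot` and their constructor signatures are not shown here, so both classes below are hypothetical stand-ins that only illustrate the shape of the change (build the context with placeholders, then pass it to the bot):

```python
from dataclasses import dataclass


@dataclass
class SweepContext:
    # Hypothetical stand-in for sweepai.core.sweep_bot.SweepContext;
    # the real fields may differ. Placeholder values for now.
    issue_url: str = ""
    username: str = ""


class SweepBot:
    # Hypothetical stand-in for sweepai.core.sweep_bot.SweepBot.
    def __init__(self, sweep_context: SweepContext):
        self.sweep_context = sweep_context


def build_bot() -> SweepBot:
    # Create the context first (with placeholder arguments), then pass
    # it to the SweepBot constructor, as proposed above.
    sweep_context = SweepContext()
    return SweepBot(sweep_context=sweep_context)
```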

wwzeng1 (Contributor) commented Aug 24, 2023

Use

human_message = HumanMessagePrompt(
    repo_name=repo_name,
    issue_url=issue_url,
    username=username,
    repo_description=repo_description,
    title=title,
    summary=message_summary,
    snippets=snippets,
    tree=tree,
)

as part of the benchmarking

sweep-nightly bot (Contributor, Author) commented Aug 24, 2023

> Use
>
> human_message = HumanMessagePrompt(
>     repo_name=repo_name,
>     issue_url=issue_url,
>     username=username,
>     repo_description=repo_description,
>     title=title,
>     summary=message_summary,
>     snippets=snippets,
>     tree=tree,
> )
>
> as part of the benchmarking

Hi @wwzeng1,

I decided to make the following changes:

File Path Proposed Changes
sweepai/utils/benchmark_modify_file.py Modify sweepai/utils/benchmark_modify_file.py with contents:
• Import the HumanMessagePrompt class from sweepai.utils.prompt_constructor.
• Before the start_time is recorded, create a HumanMessagePrompt object with placeholder values for all the required parameters. For example, you can use empty strings for repo_name, issue_url, username, repo_description, title, and summary. For snippets and tree, you can use empty lists.
• Ensure that the creation of the HumanMessagePrompt object is within the time measurement for the benchmarking.
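Putting the proposed change together, prompt construction lands inside the timed region. The real `HumanMessagePrompt` lives in `sweepai.utils.prompt_constructor` and is not shown in this thread, so the dataclass below is a hypothetical stand-in whose field names follow the snippet quoted above; all values are the placeholders the proposal suggests (empty strings, and empty lists for `snippets` and `tree`):

```python
import time
from dataclasses import dataclass, field


@dataclass
class HumanMessagePrompt:
    # Hypothetical stand-in for sweepai.utils.prompt_constructor
    # .HumanMessagePrompt; field names follow the quoted snippet,
    # but the real class may differ.
    repo_name: str
    issue_url: str
    username: str
    repo_description: str
    title: str
    summary: str
    snippets: list = field(default_factory=list)
    tree: list = field(default_factory=list)


def benchmark_prompt_construction() -> float:
    start_time = time.perf_counter()
    # The prompt object is created inside the timed region, per the
    # proposed change; every argument is a placeholder for now.
    human_message = HumanMessagePrompt(
        repo_name="",
        issue_url="",
        username="",
        repo_description="",
        title="",
        summary="",
        snippets=[],
        tree=[],
    )
    elapsed = time.perf_counter() - start_time
    print(f"Prompt construction took {elapsed:.6f} seconds")
    return elapsed
```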

Labels: sweep (Assigns Sweep to an issue or pull request)
Projects: none yet
3 participants