[Product Pull Request] fix: increase grading rounding precision to avoid incorrect grades #238
Thanks for your submission, @openedx/open-edx-project-managers will review shortly.
Information from the original PR: Enabling the rounding in #16837 has been causing noticeable (up to 1 percentage point) differences between non-rounded subsection grades and the total grade for a course. This increases the grade precision to reduce the negative implications of double rounding.

Jira
Sandbox: https://pr27788.sandbox.opencraft.hosting/
Testing instructions
Explanation: This course contains two subsections graded as
In the current approach, the grades of subsections are rounded. Therefore, in this case, we're getting
This PR changes these calculations to
Deadline: None
Reviewers
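To make the double-rounding effect concrete, here is a minimal sketch with made-up scores (not the actual values from the test course, which were lost from the comment above). Rounding each subsection grade to two decimal places before computing the course total discards precision that four-decimal rounding preserves:

```python
# Hypothetical, equally weighted subsection scores (illustrative only).
raw_scores = [0.664, 0.664]

# Pre-fix behaviour: each subsection grade is rounded to 2 decimal
# places before the course total is computed (double rounding).
from_rounded = sum(round(s, 2) for s in raw_scores) / len(raw_scores)

# Post-fix behaviour: subsection grades keep 4 decimal places.
from_precise = sum(round(s, 4) for s in raw_scores) / len(raw_scores)

# The two totals differ by about 0.4 percentage points.
print(f"{from_rounded:.4f} vs {from_precise:.4f}")  # 0.6600 vs 0.6640
```

Note that the platform itself rounds with `numpy.around`, while this sketch uses the built-in `round()`; the two can differ on exact halfway values, but the double-rounding drift shown here is the same.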
Other information
Private-ref: BB-4210
Additional information from the author: Hi @ProductRyan, thank you for checking. That's correct - the subsection grades calculated by the platform are inaccurate because of the double rounding. Therefore, we want to increase the rounding precision to produce correct results.
Pending edX platform-wide review as of 2/23/23
@jmakowski1123, just checking on the status of this. Is there anything we can do to help move this forward?
@Agrendalath sorry for the long wait here, but we want to make sure we nail this cross-platform once and for all. The Aurora team is starting to open up some capacity, and @Daniel-hershel is working to build out a roadmap now that he's been on board for a few months. I've given Daniel all the details, and he's working to incorporate it into his plans.
My 2 cents: this change could lead to a performance issue. In a nutshell, to be more accurate you need more compute time, and thus more cost; this is most relevant at high scale. Below is a script showing that it would add around 30% run time when rounding 100K grades in a row.

An unrelated issue: what would happen if an instructor ran grade computation by mistake for a certificate that has already been issued? This may not be relevant, depending on the platform's certification policy.

Lastly, a common pattern for landing a controversial change is to wrap it in a feature flag. That is not always the perfect solution, but it becomes the more relevant path if the end goal is simply to get the change in as quickly as possible.

```python
from datetime import datetime

from numpy import around


def do_100000_round(decimals):
    """Round the same grade 100,000 times with the given precision."""
    for _ in range(100_000):
        around(34.37625 / 100, decimals)


# Time rounding to 2 decimals (current behaviour)
t_before_2 = datetime.now()
do_100000_round(decimals=2)
t_after_2 = datetime.now()

# Time rounding to 4 decimals (this PR)
t_before_4 = datetime.now()
do_100000_round(decimals=4)
t_after_4 = datetime.now()

result_2 = t_after_2 - t_before_2
result_4 = t_after_4 - t_before_4
print(f"result for 2 decimals: {result_2}")
print(f"result for 4 decimals: {result_4}")
# Note: the increase is (result_4 / result_2) - 1; the original script
# had the subtraction reversed, which printed a negative percentage.
print(f"Increase of run time: {(around(result_4 / result_2, decimals=4) - 1) * 100}%")
```
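A single wall-clock delta is noisy, so a steadier way to reproduce these numbers is `timeit`, which repeats the measurement and lets you take the best of several runs. This sketch uses the built-in `round()` so it runs without numpy (the platform and the script above use `numpy.around`, so absolute timings will differ):

```python
import timeit

# Repeat each measurement 5 times and keep the fastest run, which
# filters out scheduler and GC noise better than one datetime delta.
stmt_2 = "round(34.37625 / 100, 2)"
stmt_4 = "round(34.37625 / 100, 4)"
t2 = min(timeit.repeat(stmt_2, number=100_000, repeat=5))
t4 = min(timeit.repeat(stmt_4, number=100_000, repeat=5))

print(f"2 decimals: {t2:.4f}s  4 decimals: {t4:.4f}s  ratio: {t4 / t2:.2f}")
```

As noted later in the thread, results will vary from machine to machine, so the ratio matters more than the absolute times.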
@ghassanmas, I ran this locally, and the result was
Yes, I already mentioned this in the PR:
The current grades generated by Open edX are inaccurate, though. It could be better to call this a
Well, this PR (openedx/edx-platform#27788) has been in product review since May 31, 2021...
I need to take another look; it would also definitely vary from machine to machine (Intel vs ARM, etc.). It was just one guess.
Yes, it was sufficient. I didn't mean to re-mention it, but rather that there might be cases where an instructor needs to consider what to do when a student had a passing grade and, after this change, fails (or vice versa). I speculate there might be bureaucracy involved, depending on the program, the org authoring the course, etc. Again, I am just speculating.
Hi @Daniel-hershel! Just checking to see if there's any update on this? Thanks!
@mphilbrick211 hi! So we definitely recognize the need to address grade rounding in some way, so appreciate this work. For this (or any rounding solution) to work it would need to be implemented holistically system-wide, which would include places like:
So for this solution to be accepted from a Product perspective it would have to represent a holistic implementation. The issues with rounding are also on my radar as a potential place the Aurora team will make an investment, and I can keep you posted on those developments as well.
Thanks, @Daniel-hershel!
Hi @Daniel-hershel - following up to see if there's any update here?
@mphilbrick211 Let's move this PR to draft mode until we can find an owner to drive product specs for a platform-wide solution to rounding issues in grading.
@jmakowski1123, @mphilbrick211, is there something we can do to help move this forward?
@Agrendalath thanks for the ping! I'm checking internally with Jenna on your questions.
Hi @Agrendalath. Since the implementation of this would need to take into consideration a broader platform perspective, this will need to go through the Product Review process, documented here. Per Daniel's comment (which I agree with), does your team have the bandwidth to take on defining a spec and implementing with the wider scope in mind?
@jmakowski1123, according to this comment, the value calculated in the |
That sounds right. Axim can pick up the Credentials analysis.
@jmakowski1123 FYI the Credentials IDA should be a dumb consumer of Grades. If I had my way it would be a string and not even a number. We only need it to display the grade on the certificate, and we are in no way an authority on grades. There are many Credentials bugs that have come from trying to be too smart about grades. The credentials app within edx-platform likely has more squishy boundaries, but again it is not a good thing. Just wanted to caution about any approaches that involve pushing grade precision knowledge further into the credentialing apps. I believe Ed and Glib and other credentials enthusiasts would agree with this, but we can discuss if needed.
@jmakowski1123, we have analyzed the impact of this change on the learner experience and opened openedx/frontend-app-learning#1397 to match the calculated weighted grades of individual assignment types (on the Progress page) with the scores calculated by the LMS. The original PR (openedx/edx-platform#27788) did not require any further changes. The only learner-facing change is that the grades will be displayed with increased precision on the legacy Progress page when a learner hovers over the subsections on the progress graph.

Regarding the Gradebook, after this change (in the edx-platform), the subsection scores will have increased precision. For example, instead of 67%, it will be 66.67%. I'm not sure if this is intended, but this line indicates that these scores should indeed be rounded to two decimals. We can change it to hide the decimal points or keep it as-is to make the scores more verbose for the Instructors. What do you think?
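To illustrate the Gradebook display question (a sketch of the formatting difference only, not the platform's actual rendering code):

```python
score = 2 / 3  # a subsection score of 2 out of 3 points

# Rounded to a whole percent, as the Gradebook shows today.
whole_pct = f"{round(score * 100)}%"

# With the increased precision, two decimal places survive.
precise_pct = f"{round(score * 100, 2)}%"

print(whole_pct, precise_pct)  # 67% 66.67%
```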
@jmakowski1123, this is ready for the product review. cc: @mphilbrick211, @itsjeyd
Hi @jmakowski1123, do we know who will be leading the product review process as coordinator here?
CC @ali-hugo ⬆️ |
For Contributing Author:
This is the Primary Product Ticket for the following community contribution: increase grading rounding precision to avoid incorrect grades
Checklist prior to undergoing Product Review:
The following information is required in order for Product Managers to be able to review your pull request:
Only if necessary:
Related PRs
For Product Manager doing the review:
What criteria should be analyzed from Product to approve a PR?